Test Report: Hyper-V_Windows 18551

1118682035abaed82942a21ae2e13e14d2fd3192:2024-04-01:33835

Failed tests (32/146)

Order  Failed test  Duration (s)
38 TestAddons/parallel/Registry 77
64 TestErrorSpam/setup 204.33
75 TestFunctional/serial/SoftStart 347.05
77 TestFunctional/serial/KubectlGetPods 180.74
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 11.57
85 TestFunctional/serial/CacheCmd/cache/cache_reload 179.67
87 TestFunctional/serial/MinikubeKubectlCmd 180.72
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 180.68
89 TestFunctional/serial/ExtraConfig 301.14
90 TestFunctional/serial/ComponentHealth 180.81
91 TestFunctional/serial/LogsCmd 28.34
92 TestFunctional/parallel 0
101 TestMultiControlPlane/serial/PingHostFromPods 72.38
105 TestMultiControlPlane/serial/CopyFile 598.9
159 TestMultiNode/serial/FreshStart2Nodes 234.39
160 TestMultiNode/serial/DeployApp2Nodes 108.32
161 TestMultiNode/serial/PingHostFrom2Pods 12.9
162 TestMultiNode/serial/AddNode 20.09
163 TestMultiNode/serial/MultiNodeLabels 12.66
164 TestMultiNode/serial/ProfileList 24.87
165 TestMultiNode/serial/CopyFile 25.26
166 TestMultiNode/serial/StopNode 25.63
167 TestMultiNode/serial/StartAfterStop 83.09
168 TestMultiNode/serial/RestartKeepsNodes 240.29
169 TestMultiNode/serial/DeleteNode 33.55
170 TestMultiNode/serial/StopMultiNode 89.73
171 TestMultiNode/serial/RestartMultiNode 234.48
172 TestMultiNode/serial/ValidateNameConflict 517.76
176 TestPreload 596.19
184 TestKubernetesUpgrade 10800.558
196 TestPause/serial/Start 390.55
197 TestNoKubernetes/serial/StartWithK8s 299.89
TestAddons/parallel/Registry (77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 24.9868ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kg9pg" [b1418252-c10f-4107-b496-1d57938f8905] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0359319s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-75p9z" [d1990ac6-2eb1-49cc-9568-fe08ad36d51e] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0133567s
addons_test.go:340: (dbg) Run:  kubectl --context addons-852800 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-852800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-852800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.131184s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ip: (2.9055787s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0401 10:28:38.971272    8864 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-852800 ip"
2024/04/01 10:28:41 [DEBUG] GET http://172.19.148.231:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable registry --alsologtostderr -v=1: (16.7028011s)
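
All of the commands above succeeded; the failure is the stderr check at addons_test.go:364, which treats any output on stderr, here the Docker CLI context warning, as a test failure even when the command exits 0. A minimal Go sketch of that style of assertion (hypothetical and simplified, not the actual minikube helper):

	package integration

	import (
		"bytes"
		"os/exec"
		"testing"
	)

	// Hypothetical sketch of the "expected stderr to be -empty-" check at
	// addons_test.go:364: the command can exit 0 and still fail the test if
	// anything (such as the Docker CLI context warning above) reaches stderr.
	func TestIPHasEmptyStderr(t *testing.T) {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-852800", "ip")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			t.Fatalf("args %q: %v", cmd.Args, err)
		}
		if stderr.Len() > 0 {
			t.Errorf("expected stderr to be -empty- but got: %q", stderr.String())
		}
	}

The warning itself comes from the host's Docker CLI context lookup (the missing contexts\meta path in the message), not from the cluster.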
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-852800 -n addons-852800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-852800 -n addons-852800: (14.5037816s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 logs -n 25: (11.9000189s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC |                     |
	|         | -p download-only-452300                                                                     |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |                |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC | 01 Apr 24 10:20 UTC |
	| delete  | -p download-only-452300                                                                     | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC | 01 Apr 24 10:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-134000 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | -p download-only-134000                                                                     |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                                                |                      |                   |                |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| delete  | -p download-only-134000                                                                     | download-only-134000 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-373700 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | -p download-only-373700                                                                     |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                                         |                      |                   |                |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| delete  | -p download-only-373700                                                                     | download-only-373700 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| delete  | -p download-only-452300                                                                     | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| delete  | -p download-only-134000                                                                     | download-only-134000 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| delete  | -p download-only-373700                                                                     | download-only-373700 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-729600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | binary-mirror-729600                                                                        |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |                |                     |                     |
	|         | http://127.0.0.1:49987                                                                      |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | -p binary-mirror-729600                                                                     | binary-mirror-729600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| start   | -p addons-852800 --wait=true                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |                |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |                |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | disable metrics-server                                                                      |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| ssh     | addons-852800 ssh cat                                                                       | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | /opt/local-path-provisioner/pvc-772810e3-66c1-4b28-81a8-0348debb99f1_default_test-pvc/file1 |                      |                   |                |                     |                     |
	| ip      | addons-852800 ip                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |                |                     |                     |
	|         | -v=1                                                                                        |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:29 UTC |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC |                     |
	|         | -p addons-852800                                                                            |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:21:44
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
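
The header above documents the klog-style line format used for the remainder of the log. As a reading aid, a small sketch (assumed helper, not part of minikube) that splits one of these lines into its fields:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the format documented above:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I0401 10:21:44.790156   12520 out.go:291] Setting OutFile to fd 876 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("level=%s date=%s-%s time=%s pid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}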
	I0401 10:21:44.790156   12520 out.go:291] Setting OutFile to fd 876 ...
	I0401 10:21:44.791227   12520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:21:44.791227   12520 out.go:304] Setting ErrFile to fd 880...
	I0401 10:21:44.791227   12520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:21:44.815055   12520 out.go:298] Setting JSON to false
	I0401 10:21:44.817520   12520 start.go:129] hostinfo: {"hostname":"minikube6","uptime":309663,"bootTime":1711657241,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:21:44.817520   12520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:21:44.839645   12520 out.go:177] * [addons-852800] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:21:44.846418   12520 notify.go:220] Checking for updates...
	I0401 10:21:44.852367   12520 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:21:44.855299   12520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:21:44.857824   12520 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:21:44.860468   12520 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:21:44.863223   12520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:21:44.865973   12520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:21:50.567764   12520 out.go:177] * Using the hyperv driver based on user configuration
	I0401 10:21:50.571490   12520 start.go:297] selected driver: hyperv
	I0401 10:21:50.571689   12520 start.go:901] validating driver "hyperv" against <nil>
	I0401 10:21:50.571716   12520 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:21:50.626576   12520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:21:50.628047   12520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 10:21:50.628198   12520 cni.go:84] Creating CNI manager for ""
	I0401 10:21:50.628308   12520 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:21:50.628338   12520 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 10:21:50.628338   12520 start.go:340] cluster config:
	{Name:addons-852800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-852800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:21:50.628338   12520 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:21:50.632175   12520 out.go:177] * Starting "addons-852800" primary control-plane node in "addons-852800" cluster
	I0401 10:21:50.637098   12520 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:21:50.638216   12520 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:21:50.638216   12520 cache.go:56] Caching tarball of preloaded images
	I0401 10:21:50.638216   12520 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 10:21:50.638216   12520 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 10:21:50.639291   12520 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\config.json ...
	I0401 10:21:50.639674   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\config.json: {Name:mk023331e42eb9d4f32d4269c82a00d50ffbed43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:21:50.639937   12520 start.go:360] acquireMachinesLock for addons-852800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 10:21:50.639937   12520 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-852800"
	I0401 10:21:50.641102   12520 start.go:93] Provisioning new machine with config: &{Name:addons-852800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-852800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 10:21:50.641152   12520 start.go:125] createHost starting for "" (driver="hyperv")
	I0401 10:21:50.644492   12520 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0401 10:21:50.645445   12520 start.go:159] libmachine.API.Create for "addons-852800" (driver="hyperv")
	I0401 10:21:50.645445   12520 client.go:168] LocalClient.Create starting
	I0401 10:21:50.645784   12520 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 10:21:50.905543   12520 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 10:21:51.300266   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 10:21:53.566916   12520 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 10:21:53.566916   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:21:53.567006   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 10:21:55.431659   12520 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 10:21:55.431740   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:21:55.431909   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 10:21:56.952933   12520 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 10:21:56.952933   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:21:56.953640   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 10:22:00.884332   12520 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 10:22:00.884478   12520 main.go:141] libmachine: [stderr =====>] : 
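
Here the driver has enumerated the Hyper-V switches as JSON and settles on the built-in "Default Switch". A rough sketch of decoding that output (field names taken from the JSON above; hypothetical code, not the actual hyperv driver):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Shape of one entry in the Get-VMSwitch JSON shown above. SwitchType 1
	// appears to be Internal in Hyper-V's enum, which is how the NAT'd
	// "Default Switch" is reported; External switches are type 2.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
		var switches []vmSwitch
		if err := json.Unmarshal([]byte(raw), &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			fmt.Printf("using switch %q (type %d)\n", s.Name, s.SwitchType)
		}
	}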
	I0401 10:22:00.887437   12520 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 10:22:01.426151   12520 main.go:141] libmachine: Creating SSH key...
	I0401 10:22:01.554458   12520 main.go:141] libmachine: Creating VM...
	I0401 10:22:01.555430   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 10:22:04.509565   12520 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 10:22:04.510384   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:04.510492   12520 main.go:141] libmachine: Using switch "Default Switch"
	I0401 10:22:04.510582   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 10:22:06.394788   12520 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 10:22:06.395747   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:06.395747   12520 main.go:141] libmachine: Creating VHD
	I0401 10:22:06.395747   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 10:22:10.299191   12520 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 10754524-81AC-4871-B719-6BBA8FE0DA57
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 10:22:10.299476   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:10.299476   12520 main.go:141] libmachine: Writing magic tar header
	I0401 10:22:10.299907   12520 main.go:141] libmachine: Writing SSH key tar header
	I0401 10:22:10.308860   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 10:22:13.582622   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:13.582805   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:13.582805   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\disk.vhd' -SizeBytes 20000MB
	I0401 10:22:16.196657   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:16.196657   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:16.196657   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-852800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0401 10:22:20.043872   12520 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-852800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 10:22:20.044239   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:20.044239   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-852800 -DynamicMemoryEnabled $false
	I0401 10:22:22.334607   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:22.334607   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:22.334690   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-852800 -Count 2
	I0401 10:22:24.540745   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:24.540745   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:24.540745   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-852800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\boot2docker.iso'
	I0401 10:22:27.184647   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:27.184647   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:27.185548   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-852800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\disk.vhd'
	I0401 10:22:29.947914   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:29.947914   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:29.947914   12520 main.go:141] libmachine: Starting VM...
	I0401 10:22:29.947914   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-852800
	I0401 10:22:33.101540   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:33.101781   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:33.101781   12520 main.go:141] libmachine: Waiting for host to start...
	I0401 10:22:33.101781   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:22:35.479933   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:22:35.480050   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:35.480102   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:22:38.084173   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:38.084173   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:39.088587   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:22:41.363145   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:22:41.363145   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:41.363320   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:22:43.991235   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:43.991235   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:44.999324   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:22:47.268656   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:22:47.268939   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:47.269025   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:22:49.878180   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:49.878180   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:50.889247   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:22:53.194495   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:22:53.194495   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:53.195566   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:22:55.806689   12520 main.go:141] libmachine: [stdout =====>] : 
	I0401 10:22:55.806689   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:56.812475   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:22:59.116764   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:22:59.117451   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:22:59.117560   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:01.805908   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:01.806099   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:01.806219   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:04.002863   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:04.002863   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:04.002863   12520 machine.go:94] provisionDockerMachine start ...
	I0401 10:23:04.003920   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:06.269892   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:06.269892   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:06.269963   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:08.935746   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:08.935746   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:08.943010   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:23:08.952535   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:23:08.952535   12520 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 10:23:09.087314   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 10:23:09.087314   12520 buildroot.go:166] provisioning hostname "addons-852800"
	I0401 10:23:09.087314   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:11.270888   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:11.270888   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:11.270888   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:13.859224   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:13.859224   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:13.865138   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:23:13.865867   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:23:13.865867   12520 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-852800 && echo "addons-852800" | sudo tee /etc/hostname
	I0401 10:23:14.027164   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-852800
	
	I0401 10:23:14.027308   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:16.202722   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:16.202722   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:16.203022   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:18.856213   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:18.856213   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:18.863848   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:23:18.864540   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:23:18.864540   12520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-852800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-852800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-852800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 10:23:19.017990   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:23:19.017990   12520 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 10:23:19.017990   12520 buildroot.go:174] setting up certificates
	I0401 10:23:19.017990   12520 provision.go:84] configureAuth start
	I0401 10:23:19.017990   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:21.172949   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:21.173616   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:21.173616   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:23.752289   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:23.752846   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:23.753035   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:25.915813   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:25.916394   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:25.916537   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:28.478896   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:28.478896   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:28.478972   12520 provision.go:143] copyHostCerts
	I0401 10:23:28.479552   12520 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 10:23:28.481192   12520 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 10:23:28.482562   12520 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 10:23:28.483647   12520 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-852800 san=[127.0.0.1 172.19.148.231 addons-852800 localhost minikube]
	I0401 10:23:28.603545   12520 provision.go:177] copyRemoteCerts
	I0401 10:23:28.616542   12520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 10:23:28.616542   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:30.761764   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:30.762171   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:30.762171   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:33.376218   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:33.376487   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:33.376583   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:23:33.489257   12520 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8726113s)
	I0401 10:23:33.490110   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 10:23:33.536297   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 10:23:33.585755   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 10:23:33.640106   12520 provision.go:87] duration metric: took 14.6220143s to configureAuth
	I0401 10:23:33.640106   12520 buildroot.go:189] setting minikube options for container-runtime
	I0401 10:23:33.640106   12520 config.go:182] Loaded profile config "addons-852800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:23:33.640106   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:35.917700   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:35.917700   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:35.917700   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:38.552354   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:38.553034   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:38.558715   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:23:38.558929   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:23:38.558929   12520 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 10:23:38.700040   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 10:23:38.700040   12520 buildroot.go:70] root file system type: tmpfs
	I0401 10:23:38.700808   12520 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 10:23:38.700979   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:40.871119   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:40.871119   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:40.871749   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:43.435095   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:43.435175   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:43.441391   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:23:43.442302   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:23:43.442302   12520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 10:23:43.597424   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
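	The unit above is rendered from a template and streamed through sudo tee into docker.service.new; the empty ExecStart= line clears the inherited command exactly as the embedded comment describes. A minimal text/template sketch of that rendering step, with illustrative field names rather than minikube's real template:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Illustrative parameters; the actual template carries many more knobs.
	type unitParams struct {
		Provider    string
		ServiceCIDR string
	}
	
	const unitTmpl = `[Service]
	Type=notify
	Restart=on-failure
	# Clear the inherited ExecStart, then set ours; systemd rejects two otherwise.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --label provider={{.Provider}} --insecure-registry {{.ServiceCIDR}}
	`
	
	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		// Values taken from this run: hyperv driver, default service CIDR.
		if err := t.Execute(os.Stdout, unitParams{Provider: "hyperv", ServiceCIDR: "10.96.0.0/12"}); err != nil {
			panic(err)
		}
	}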
	
	I0401 10:23:43.597424   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:45.763711   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:45.763793   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:45.763870   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:48.371608   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:48.371608   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:48.377081   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:23:48.377700   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:23:48.377793   12520 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 10:23:50.556762   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
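	The diff-or-move one-liner is the idempotence trick: docker is only re-enabled and restarted when the freshly rendered unit actually differs, and on first boot the diff fails with "can't stat", which takes the install branch. The same check as a Go sketch (paths are the ones from this run; adjust when trying locally):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	// needsUpdate reports whether the freshly rendered unit differs from the
	// installed one; a missing installed unit (first boot) counts as "update".
	func needsUpdate(installed, rendered string) (bool, error) {
		oldUnit, err := os.ReadFile(installed)
		if os.IsNotExist(err) {
			return true, nil
		}
		if err != nil {
			return false, err
		}
		newUnit, err := os.ReadFile(rendered)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(oldUnit, newUnit), nil
	}
	
	func main() {
		changed, err := needsUpdate("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		fmt.Println("restart docker:", changed)
	}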
	
	I0401 10:23:50.556762   12520 machine.go:97] duration metric: took 46.5535726s to provisionDockerMachine
	I0401 10:23:50.556762   12520 client.go:171] duration metric: took 1m59.9104772s to LocalClient.Create
	I0401 10:23:50.556762   12520 start.go:167] duration metric: took 1m59.9104772s to libmachine.API.Create "addons-852800"
	I0401 10:23:50.556762   12520 start.go:293] postStartSetup for "addons-852800" (driver="hyperv")
	I0401 10:23:50.556762   12520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 10:23:50.568722   12520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 10:23:50.568722   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:52.718228   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:52.718406   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:52.718537   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:23:55.354657   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:23:55.354941   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:55.355795   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:23:55.473953   12520 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9051215s)
	I0401 10:23:55.486040   12520 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 10:23:55.493545   12520 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 10:23:55.493662   12520 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 10:23:55.494025   12520 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 10:23:55.494025   12520 start.go:296] duration metric: took 4.9372281s for postStartSetup
	I0401 10:23:55.497878   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:23:57.716597   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:23:57.717185   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:23:57.717330   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:24:00.325272   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:24:00.325272   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:00.326309   12520 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\config.json ...
	I0401 10:24:00.328798   12520 start.go:128] duration metric: took 2m9.6867385s to createHost
	I0401 10:24:00.329726   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:24:02.546310   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:24:02.546719   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:02.547005   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:24:05.193025   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:24:05.193025   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:05.199267   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:24:05.199919   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:24:05.199996   12520 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 10:24:05.330571   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711967045.331939474
	
	I0401 10:24:05.330571   12520 fix.go:216] guest clock: 1711967045.331939474
	I0401 10:24:05.330571   12520 fix.go:229] Guest: 2024-04-01 10:24:05.331939474 +0000 UTC Remote: 2024-04-01 10:24:00.3296282 +0000 UTC m=+135.719727001 (delta=5.002311274s)
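	The guest clock check parses the guest's date +%s.%N epoch and compares it against the host clock; the 5.002s delta above is what triggers the sudo date -s below. A sketch of the parsing and delta computation, using the sample value from this run:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	func parseEpoch(s string) (time.Time, error) {
		sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
		secs, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nanos := int64(0)
		if frac != "" {
			// Right-pad to 9 digits so "5" means 500ms, not 5ns.
			frac = (frac + "000000000")[:9]
			if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(secs, nanos), nil
	}
	
	func main() {
		guest, err := parseEpoch("1711967045.331939474") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now()) // in the real check, "now" is the host clock
		fmt.Printf("guest=%s delta=%s\n", guest.UTC(), delta)
	}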
	I0401 10:24:05.330571   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:24:07.565375   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:24:07.565741   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:07.565816   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:24:10.260235   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:24:10.260235   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:10.266196   12520 main.go:141] libmachine: Using SSH client type: native
	I0401 10:24:10.266845   12520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.148.231 22 <nil> <nil>}
	I0401 10:24:10.266910   12520 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711967045
	I0401 10:24:10.421062   12520 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 10:24:05 UTC 2024
	
	I0401 10:24:10.421062   12520 fix.go:236] clock set: Mon Apr  1 10:24:05 UTC 2024
	 (err=<nil>)
	I0401 10:24:10.421062   12520 start.go:83] releasing machines lock for "addons-852800", held for 2m19.7801468s
	I0401 10:24:10.421280   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:24:12.601597   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:24:12.601694   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:12.601755   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:24:15.241840   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:24:15.241840   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:15.246284   12520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 10:24:15.246474   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:24:15.259120   12520 ssh_runner.go:195] Run: cat /version.json
	I0401 10:24:15.259120   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:24:17.529990   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:24:17.529990   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:17.530663   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:24:17.548241   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:24:17.548241   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:17.548241   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:24:20.243416   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:24:20.243416   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:20.243762   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:24:20.270777   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:24:20.270777   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:24:20.271408   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:24:20.398909   12520 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1524732s)
	I0401 10:24:20.398909   12520 ssh_runner.go:235] Completed: cat /version.json: (5.1397528s)
	I0401 10:24:20.411651   12520 ssh_runner.go:195] Run: systemctl --version
	I0401 10:24:20.433657   12520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 10:24:20.443212   12520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 10:24:20.455899   12520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 10:24:20.485808   12520 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 10:24:20.485808   12520 start.go:494] detecting cgroup driver to use...
	I0401 10:24:20.485808   12520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:24:20.534800   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 10:24:20.568144   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 10:24:20.589315   12520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 10:24:20.602590   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 10:24:20.636480   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:24:20.671035   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 10:24:20.703628   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:24:20.736404   12520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 10:24:20.768262   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 10:24:20.802739   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 10:24:20.839044   12520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 10:24:20.872607   12520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 10:24:20.905569   12520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 10:24:20.943933   12520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:24:21.145705   12520 ssh_runner.go:195] Run: sudo systemctl restart containerd
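	Each of the sed runs above rewrites /etc/containerd/config.toml in place before the restart; for example the SystemdCgroup line is forced to false because this run drives every runtime with cgroupfs. A sketch of that one rewrite in Go, over an illustrative config fragment:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// Fragment of a containerd config.toml; the real file lives in the guest.
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Same effect as the sed above: force SystemdCgroup = false, keeping indentation.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}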
	I0401 10:24:21.180969   12520 start.go:494] detecting cgroup driver to use...
	I0401 10:24:21.194283   12520 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 10:24:21.229048   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:24:21.264127   12520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 10:24:21.314618   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:24:21.352892   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:24:21.391201   12520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 10:24:21.456087   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:24:21.481067   12520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:24:21.527874   12520 ssh_runner.go:195] Run: which cri-dockerd
	I0401 10:24:21.545105   12520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 10:24:21.563812   12520 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 10:24:21.606148   12520 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 10:24:21.810375   12520 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 10:24:22.003719   12520 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 10:24:22.003719   12520 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 10:24:22.048476   12520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:24:22.254780   12520 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 10:24:24.769061   12520 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5136178s)
	I0401 10:24:24.783494   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 10:24:24.821591   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 10:24:24.872630   12520 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 10:24:25.093948   12520 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 10:24:25.317498   12520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:24:25.536548   12520 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 10:24:25.579927   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 10:24:25.625336   12520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:24:25.843576   12520 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 10:24:25.952437   12520 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 10:24:25.966664   12520 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 10:24:25.979075   12520 start.go:562] Will wait 60s for crictl version
	I0401 10:24:25.991039   12520 ssh_runner.go:195] Run: which crictl
	I0401 10:24:26.009269   12520 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 10:24:26.092196   12520 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 10:24:26.103010   12520 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 10:24:26.146533   12520 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 10:24:26.184491   12520 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 10:24:26.184738   12520 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 10:24:26.189391   12520 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 10:24:26.189391   12520 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 10:24:26.189391   12520 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 10:24:26.189391   12520 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 10:24:26.191511   12520 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 10:24:26.191511   12520 ip.go:210] interface addr: 172.19.144.1/20
	I0401 10:24:26.205673   12520 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 10:24:26.210702   12520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
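	The hosts-file one-liner is idempotent: it drops any stale line ending in the tab-separated name, appends the fresh mapping, and copies the temp file back over /etc/hosts. A sketch of the same filter-and-append in Go, on an in-memory copy rather than the real file:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// pinHost drops any stale mapping for name and appends ip<TAB>name.
	func pinHost(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale mapping for this name
			}
			if line != "" {
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}
	
	func main() {
		hosts := "127.0.0.1\tlocalhost\n172.19.144.2\thost.minikube.internal\n"
		fmt.Print(pinHost(hosts, "172.19.144.1", "host.minikube.internal"))
	}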
	I0401 10:24:26.233804   12520 kubeadm.go:877] updating cluster {Name:addons-852800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-852800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.148.231 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 10:24:26.234101   12520 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:24:26.243428   12520 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 10:24:26.263578   12520 docker.go:685] Got preloaded images: 
	I0401 10:24:26.263636   12520 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0401 10:24:26.277453   12520 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 10:24:26.314133   12520 ssh_runner.go:195] Run: which lz4
	I0401 10:24:26.333295   12520 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 10:24:26.339367   12520 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 10:24:26.339367   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0401 10:24:28.259778   12520 docker.go:649] duration metric: took 1.9387263s to copy over tarball
	I0401 10:24:28.272718   12520 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 10:24:34.706410   12520 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.4336464s)
	I0401 10:24:34.706410   12520 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 10:24:34.785799   12520 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 10:24:34.809044   12520 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0401 10:24:35.646404   12520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:24:35.882317   12520 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 10:24:40.578463   12520 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.6960615s)
	I0401 10:24:40.589517   12520 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 10:24:40.616899   12520 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0401 10:24:40.616969   12520 cache_images.go:84] Images are preloaded, skipping loading
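	"Preloaded" here just means every expected registry.k8s.io tag appears in the docker images listing, mirroring the earlier "kube-apiserver:v1.29.3 wasn't preloaded" check. A sketch of that membership test:
	
	package main
	
	import "fmt"
	
	// preloaded reports whether the wanted tag appears in a docker images listing.
	func preloaded(images []string, want string) bool {
		for _, img := range images {
			if img == want {
				return true
			}
		}
		return false
	}
	
	func main() {
		images := []string{
			"registry.k8s.io/kube-apiserver:v1.29.3",
			"registry.k8s.io/etcd:3.5.12-0",
		}
		fmt.Println(preloaded(images, "registry.k8s.io/kube-apiserver:v1.29.3")) // true
	}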
	I0401 10:24:40.617127   12520 kubeadm.go:928] updating node { 172.19.148.231 8443 v1.29.3 docker true true} ...
	I0401 10:24:40.617127   12520 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-852800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.148.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-852800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 10:24:40.628023   12520 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0401 10:24:40.661335   12520 cni.go:84] Creating CNI manager for ""
	I0401 10:24:40.661335   12520 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:24:40.661335   12520 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 10:24:40.661335   12520 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.148.231 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-852800 NodeName:addons-852800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.148.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.148.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 10:24:40.662147   12520 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.148.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-852800"
	  kubeletExtraArgs:
	    node-ip: 172.19.148.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.148.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
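	The generated file stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and kubeadm consumes them all from the single /var/tmp/minikube/kubeadm.yaml. A stdlib-only Go sketch that splits such a stack and reports each document's kind:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Abbreviated stand-in for the config above; only kind lines matter here.
		config := `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration`
	
		for i, doc := range strings.Split(config, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if kind, ok := strings.CutPrefix(line, "kind: "); ok {
					fmt.Printf("doc %d: %s\n", i, kind)
				}
			}
		}
	}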
	
	I0401 10:24:40.674584   12520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 10:24:40.693711   12520 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 10:24:40.710045   12520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 10:24:40.729582   12520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0401 10:24:40.763727   12520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 10:24:40.797055   12520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0401 10:24:40.842352   12520 ssh_runner.go:195] Run: grep 172.19.148.231	control-plane.minikube.internal$ /etc/hosts
	I0401 10:24:40.847538   12520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.148.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 10:24:40.881499   12520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:24:41.091699   12520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 10:24:41.125602   12520 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800 for IP: 172.19.148.231
	I0401 10:24:41.125602   12520 certs.go:194] generating shared ca certs ...
	I0401 10:24:41.125804   12520 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:41.126237   12520 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 10:24:41.444935   12520 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0401 10:24:41.444935   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:41.446837   12520 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0401 10:24:41.446837   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:41.447077   12520 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 10:24:42.004101   12520 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0401 10:24:42.004101   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.004931   12520 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0401 10:24:42.004931   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.006641   12520 certs.go:256] generating profile certs ...
	I0401 10:24:42.007079   12520 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.key
	I0401 10:24:42.007587   12520 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt with IP's: []
	I0401 10:24:42.173416   12520 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt ...
	I0401 10:24:42.173416   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: {Name:mk91fb194d63765d8b9e6d58d80608fcf661a775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.175375   12520 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.key ...
	I0401 10:24:42.175375   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.key: {Name:mkd112adfddc053032eea313b581dd13ab91d2f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.175927   12520 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.cba881c7
	I0401 10:24:42.177009   12520 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.cba881c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.148.231]
	I0401 10:24:42.307004   12520 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.cba881c7 ...
	I0401 10:24:42.307004   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.cba881c7: {Name:mk1fb92527ee8b9564500e1bc0ebb8b641c0944d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.307839   12520 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.cba881c7 ...
	I0401 10:24:42.307839   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.cba881c7: {Name:mk13bb657b3af41b28005314b085f8b2ba427356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.308837   12520 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.cba881c7 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt
	I0401 10:24:42.321113   12520 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.cba881c7 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.key
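	The apiserver profile cert is minted with four IP SANs: the service-network VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP. A self-contained sketch that mints a self-signed cert with the same SANs (illustrative only; minikube signs with its own CA instead):
	
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("172.19.148.231"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		cert, err := x509.ParseCertificate(der)
		if err != nil {
			panic(err)
		}
		fmt.Println("SANs:", cert.IPAddresses)
	}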
	I0401 10:24:42.321877   12520 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key
	I0401 10:24:42.321877   12520 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt with IP's: []
	I0401 10:24:42.606969   12520 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt ...
	I0401 10:24:42.606969   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt: {Name:mk009da4742a2005352644e419cda7d42c9034b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.609149   12520 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key ...
	I0401 10:24:42.609149   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key: {Name:mk2da5c925d9b4e5e9574e3d16e635a61d4b50da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:24:42.626208   12520 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 10:24:42.627197   12520 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 10:24:42.627197   12520 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 10:24:42.627197   12520 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 10:24:42.629875   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 10:24:42.679336   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 10:24:42.721828   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 10:24:42.763321   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 10:24:42.817442   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 10:24:42.866737   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 10:24:42.913263   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 10:24:42.962200   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 10:24:43.007296   12520 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 10:24:43.053133   12520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 10:24:43.099756   12520 ssh_runner.go:195] Run: openssl version
	I0401 10:24:43.120060   12520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 10:24:43.151584   12520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 10:24:43.158669   12520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 10:24:43.171092   12520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 10:24:43.192870   12520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 10:24:43.226744   12520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 10:24:43.234526   12520 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 10:24:43.235130   12520 kubeadm.go:391] StartCluster: {Name:addons-852800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-852800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.148.231 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:24:43.245128   12520 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0401 10:24:43.281849   12520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 10:24:43.319317   12520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 10:24:43.353132   12520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 10:24:43.372478   12520 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 10:24:43.372544   12520 kubeadm.go:156] found existing configuration files:
	
	I0401 10:24:43.383943   12520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 10:24:43.404948   12520 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 10:24:43.416949   12520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 10:24:43.449666   12520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 10:24:43.467285   12520 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 10:24:43.478644   12520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 10:24:43.507147   12520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 10:24:43.526171   12520 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 10:24:43.538141   12520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 10:24:43.566147   12520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 10:24:43.584630   12520 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 10:24:43.595139   12520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 10:24:43.613790   12520 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 10:24:43.891323   12520 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 10:24:58.499449   12520 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 10:24:58.499625   12520 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 10:24:58.499864   12520 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 10:24:58.500163   12520 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 10:24:58.500545   12520 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 10:24:58.500634   12520 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 10:24:58.503334   12520 out.go:204]   - Generating certificates and keys ...
	I0401 10:24:58.503904   12520 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 10:24:58.504004   12520 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 10:24:58.504004   12520 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 10:24:58.504004   12520 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 10:24:58.504004   12520 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 10:24:58.504624   12520 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 10:24:58.504735   12520 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 10:24:58.504806   12520 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-852800 localhost] and IPs [172.19.148.231 127.0.0.1 ::1]
	I0401 10:24:58.504806   12520 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 10:24:58.505375   12520 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-852800 localhost] and IPs [172.19.148.231 127.0.0.1 ::1]
	I0401 10:24:58.505514   12520 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 10:24:58.505514   12520 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 10:24:58.505514   12520 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 10:24:58.505514   12520 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 10:24:58.505514   12520 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 10:24:58.506210   12520 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 10:24:58.506297   12520 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 10:24:58.506297   12520 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 10:24:58.506297   12520 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 10:24:58.506297   12520 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 10:24:58.506297   12520 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 10:24:58.509216   12520 out.go:204]   - Booting up control plane ...
	I0401 10:24:58.510208   12520 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 10:24:58.510208   12520 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 10:24:58.510208   12520 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 10:24:58.510208   12520 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 10:24:58.510208   12520 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 10:24:58.510208   12520 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 10:24:58.511226   12520 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 10:24:58.511226   12520 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.005366 seconds
	I0401 10:24:58.511226   12520 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 10:24:58.511226   12520 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 10:24:58.511226   12520 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 10:24:58.512209   12520 kubeadm.go:309] [mark-control-plane] Marking the node addons-852800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 10:24:58.512209   12520 kubeadm.go:309] [bootstrap-token] Using token: 3k9w7f.pbpuumeuuogx4r2c
	I0401 10:24:58.515221   12520 out.go:204]   - Configuring RBAC rules ...
	I0401 10:24:58.515221   12520 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 10:24:58.515221   12520 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 10:24:58.516225   12520 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 10:24:58.516225   12520 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 10:24:58.516225   12520 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 10:24:58.516225   12520 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 10:24:58.517227   12520 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 10:24:58.517227   12520 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 10:24:58.517227   12520 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 10:24:58.517227   12520 kubeadm.go:309] 
	I0401 10:24:58.517227   12520 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 10:24:58.517227   12520 kubeadm.go:309] 
	I0401 10:24:58.517227   12520 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 10:24:58.517227   12520 kubeadm.go:309] 
	I0401 10:24:58.517227   12520 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 10:24:58.517227   12520 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 10:24:58.518263   12520 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 10:24:58.518263   12520 kubeadm.go:309] 
	I0401 10:24:58.518263   12520 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 10:24:58.518263   12520 kubeadm.go:309] 
	I0401 10:24:58.518263   12520 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 10:24:58.518263   12520 kubeadm.go:309] 
	I0401 10:24:58.518263   12520 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 10:24:58.518263   12520 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 10:24:58.518263   12520 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 10:24:58.518263   12520 kubeadm.go:309] 
	I0401 10:24:58.519230   12520 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 10:24:58.519230   12520 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 10:24:58.519230   12520 kubeadm.go:309] 
	I0401 10:24:58.519230   12520 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3k9w7f.pbpuumeuuogx4r2c \
	I0401 10:24:58.519230   12520 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c \
	I0401 10:24:58.519230   12520 kubeadm.go:309] 	--control-plane 
	I0401 10:24:58.519230   12520 kubeadm.go:309] 
	I0401 10:24:58.520221   12520 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 10:24:58.520221   12520 kubeadm.go:309] 
	I0401 10:24:58.520221   12520 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3k9w7f.pbpuumeuuogx4r2c \
	I0401 10:24:58.520221   12520 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c 
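	Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. If a join command ever needs to be rebuilt after the token output is lost, the same hash can be regenerated on the control-plane node with the standard openssl pipeline from the kubeadm documentation:

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'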
	I0401 10:24:58.520221   12520 cni.go:84] Creating CNI manager for ""
	I0401 10:24:58.520221   12520 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:24:58.523221   12520 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 10:24:58.537224   12520 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 10:24:58.566843   12520 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
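	Note: the 457-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI configuration. As an illustrative sketch only (not the exact payload minikube writes; the subnet value in particular is an assumption), a bridge conflist has roughly this shape:

	    {
	      "cniVersion": "1.0.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }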
	I0401 10:24:58.632517   12520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 10:24:58.646837   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-852800 minikube.k8s.io/updated_at=2024_04_01T10_24_58_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=addons-852800 minikube.k8s.io/primary=true
	I0401 10:24:58.646837   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:24:58.661405   12520 ops.go:34] apiserver oom_adj: -16
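	Note: ops.go reads /proc/$(pgrep kube-apiserver)/oom_adj as a sanity check that the apiserver is running with OOM protection. oom_adj is the legacy kernel knob (range -17..15); the kubelet gives node-critical pods an oom_score_adj of -997, which the kernel maps back for the legacy read as roughly

	    -997 * 17 / 1000 = -16.9...  ->  oom_adj = -16 (truncated toward zero)

	matching the value logged above, i.e. the apiserver is among the last processes the OOM killer would pick.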
	I0401 10:24:59.091439   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:24:59.595673   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:00.106651   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:00.591230   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:01.093793   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:01.600839   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:02.102933   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:02.606930   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:03.104595   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:03.593645   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:04.093916   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:04.596414   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:05.101720   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:05.608820   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:06.092710   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:06.597713   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:07.101364   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:07.593726   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:08.096560   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:08.604961   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:09.093925   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:09.602133   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:10.093766   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:10.598137   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:11.105097   12520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:25:11.242673   12520 kubeadm.go:1107] duration metric: took 12.6098821s to wait for elevateKubeSystemPrivileges
	W0401 10:25:11.242858   12520 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 10:25:11.242912   12520 kubeadm.go:393] duration metric: took 28.0076612s to StartCluster
	I0401 10:25:11.242912   12520 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:25:11.242912   12520 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:25:11.244814   12520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:25:11.247714   12520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 10:25:11.247946   12520 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.148.231 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 10:25:11.247946   12520 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
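	Note: the toEnable map above is the full addon matrix for this profile (true = will be enabled for this run). On a live profile the same view is available with, e.g.:

	    out/minikube-windows-amd64.exe -p addons-852800 addons list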
	I0401 10:25:11.249467   12520 config.go:182] Loaded profile config "addons-852800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:25:11.249561   12520 addons.go:69] Setting cloud-spanner=true in profile "addons-852800"
	I0401 10:25:11.249583   12520 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-852800"
	I0401 10:25:11.249583   12520 addons.go:69] Setting yakd=true in profile "addons-852800"
	I0401 10:25:11.249583   12520 addons.go:69] Setting ingress=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting inspektor-gadget=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting registry=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting volumesnapshots=true in profile "addons-852800"
	I0401 10:25:11.249886   12520 addons.go:234] Setting addon registry=true in "addons-852800"
	I0401 10:25:11.256733   12520 out.go:177] * Verifying Kubernetes components...
	I0401 10:25:11.249583   12520 addons.go:69] Setting gcp-auth=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting default-storageclass=true in profile "addons-852800"
	I0401 10:25:11.249583   12520 addons.go:234] Setting addon cloud-spanner=true in "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting helm-tiller=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting ingress-dns=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting metrics-server=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting storage-provisioner=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:234] Setting addon yakd=true in "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:234] Setting addon ingress=true in "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:234] Setting addon inspektor-gadget=true in "addons-852800"
	I0401 10:25:11.249677   12520 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-852800"
	I0401 10:25:11.249886   12520 addons.go:234] Setting addon volumesnapshots=true in "addons-852800"
	I0401 10:25:11.250222   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.256733   12520 mustload.go:65] Loading cluster: addons-852800
	I0401 10:25:11.259717   12520 addons.go:234] Setting addon metrics-server=true in "addons-852800"
	I0401 10:25:11.259717   12520 addons.go:234] Setting addon helm-tiller=true in "addons-852800"
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-852800"
	I0401 10:25:11.259717   12520 addons.go:234] Setting addon storage-provisioner=true in "addons-852800"
	I0401 10:25:11.259717   12520 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-852800"
	I0401 10:25:11.259717   12520 addons.go:234] Setting addon ingress-dns=true in "addons-852800"
	I0401 10:25:11.259717   12520 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-852800"
	I0401 10:25:11.259717   12520 config.go:182] Loaded profile config "addons-852800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.260726   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.260726   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.259717   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:11.264726   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.264726   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.264726   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.267719   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.268731   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.268731   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.269732   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.269732   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.269732   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.269732   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.271734   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.271734   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.271734   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.271734   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.271734   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:11.301759   12520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:25:12.399073   12520 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.0973066s)
	I0401 10:25:12.400446   12520 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.1524919s)
	I0401 10:25:12.415137   12520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 10:25:12.433929   12520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 10:25:15.963632   12520 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.5296782s)
	I0401 10:25:15.969631   12520 node_ready.go:35] waiting up to 6m0s for node "addons-852800" to be "Ready" ...
	I0401 10:25:15.969631   12520 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.5544695s)
	I0401 10:25:15.969631   12520 start.go:946] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
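	Note: the sed pipeline that completed at 10:25:15 rewrites the coredns ConfigMap in place. Per the two insertions in that command, the resulting Corefile gains a log directive before errors and a hosts block before the forward line, i.e. (abridged; directives between errors and hosts omitted):

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           172.19.144.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }

	This is what makes host.minikube.internal resolvable from inside the cluster.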
	I0401 10:25:16.290634   12520 node_ready.go:49] node "addons-852800" has status "Ready":"True"
	I0401 10:25:16.290634   12520 node_ready.go:38] duration metric: took 321.0002ms for node "addons-852800" to be "Ready" ...
	I0401 10:25:16.290634   12520 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 10:25:16.411634   12520 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6tmk4" in "kube-system" namespace to be "Ready" ...
	I0401 10:25:16.674256   12520 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-852800" context rescaled to 1 replicas
	I0401 10:25:17.560466   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:17.560466   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:17.594461   12520 addons.go:234] Setting addon default-storageclass=true in "addons-852800"
	I0401 10:25:17.594461   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:17.595455   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:17.786213   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:17.786213   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:17.786213   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:17.933986   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:17.934994   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:17.939992   12520 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0401 10:25:17.944601   12520 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 10:25:17.944724   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0401 10:25:17.944844   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:17.998028   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:17.998357   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.002351   12520 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0401 10:25:18.005074   12520 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0401 10:25:18.005213   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0401 10:25:18.005213   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.004396   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.005330   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.009009   12520 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0401 10:25:18.019227   12520 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 10:25:18.019227   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 10:25:18.019227   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.048894   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.049891   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.054413   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0401 10:25:18.057010   12520 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0401 10:25:18.057010   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0401 10:25:18.057010   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.101146   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.101146   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.105141   12520 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-852800"
	I0401 10:25:18.105141   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:18.109884   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.164391   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.164391   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.170088   12520 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0401 10:25:18.175453   12520 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0401 10:25:18.175453   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0401 10:25:18.175453   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.216585   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.216585   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.221569   12520 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0401 10:25:18.219571   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.226565   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.229556   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0401 10:25:18.231588   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0401 10:25:18.233565   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0401 10:25:18.238683   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0401 10:25:18.238683   12520 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0401 10:25:18.263617   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0401 10:25:18.263617   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.266869   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0401 10:25:18.285191   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0401 10:25:18.301191   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0401 10:25:18.303944   12520 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0401 10:25:18.310527   12520 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0401 10:25:18.310527   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0401 10:25:18.310527   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.347704   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.347704   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.358700   12520 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 10:25:18.361704   12520 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 10:25:18.366824   12520 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0401 10:25:18.371815   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.384928   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.379828   12520 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 10:25:18.380061   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.389929   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.395407   12520 out.go:177]   - Using image docker.io/registry:2.8.3
	I0401 10:25:18.392561   12520 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0401 10:25:18.392561   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0401 10:25:18.395804   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.403994   12520 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0401 10:25:18.403994   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0401 10:25:18.403994   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.403994   12520 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0401 10:25:18.406805   12520 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0401 10:25:18.406805   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0401 10:25:18.407806   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.460753   12520 pod_ready.go:102] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:18.612756   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.612756   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.628760   12520 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 10:25:18.638804   12520 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 10:25:18.638804   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 10:25:18.638804   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:18.869092   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:18.869092   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:18.886092   12520 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0401 10:25:18.891767   12520 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 10:25:18.891767   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0401 10:25:18.891767   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:21.222758   12520 pod_ready.go:102] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:23.442797   12520 pod_ready.go:102] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:23.733226   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:23.733226   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:23.733226   12520 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 10:25:23.733226   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 10:25:23.733226   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:23.809234   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:23.809234   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:23.809234   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:24.055862   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.055862   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.056859   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:24.134274   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.134274   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.137288   12520 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0401 10:25:24.144130   12520 out.go:177]   - Using image docker.io/busybox:stable
	I0401 10:25:24.161590   12520 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 10:25:24.161590   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0401 10:25:24.161722   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:24.194104   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.194104   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.194104   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:24.387630   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.387630   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.387630   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:24.418471   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.418471   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.419507   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:24.560069   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.560069   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.561063   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:24.612425   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.612425   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.711193   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:24.729673   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:24.729673   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:24.730631   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:25.196746   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:25.196746   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:25.196746   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:25.625340   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:25.625340   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:25.626340   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:26.048557   12520 pod_ready.go:102] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:26.530945   12520 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0401 10:25:26.531935   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:26.779936   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:26.779936   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:26.779936   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:26.910797   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:26.911297   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:26.911297   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:28.442891   12520 pod_ready.go:102] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:30.286884   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:30.286884   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:30.286884   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:30.420963   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:30.420963   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:30.420963   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:30.462428   12520 pod_ready.go:102] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:31.046253   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:31.046253   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:31.047567   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
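	Note: the libmachine polling above reduces to two PowerShell one-liners, taken verbatim from the exec lines: the driver first waits for the VM state to report Running, then for an IPv4 address on the first adapter, and only then opens the SSH session (port 22, the profile's id_rsa key):

	    powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	    powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]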
	I0401 10:25:31.383495   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 10:25:31.606593   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:31.606593   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:31.607599   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:31.723710   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:31.723710   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:31.725108   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:31.794978   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:31.794978   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:31.794978   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:31.881572   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:31.881733   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:31.882693   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:31.961961   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:31.961961   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:31.962802   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:32.030147   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:32.030147   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:32.031214   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:32.036461   12520 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0401 10:25:32.036550   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0401 10:25:32.137226   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:32.137226   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:32.138230   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:32.195482   12520 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0401 10:25:32.195482   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0401 10:25:32.214042   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:32.214042   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:32.215213   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:32.220875   12520 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0401 10:25:32.220989   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0401 10:25:32.241959   12520 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0401 10:25:32.242001   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0401 10:25:32.247366   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:32.247704   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:32.247704   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0401 10:25:32.380346   12520 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0401 10:25:32.380346   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0401 10:25:32.489074   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0401 10:25:32.519804   12520 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0401 10:25:32.519882   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0401 10:25:32.548165   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0401 10:25:32.559472   12520 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0401 10:25:32.559618   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0401 10:25:32.562205   12520 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0401 10:25:32.562205   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0401 10:25:32.585929   12520 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 10:25:32.585929   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0401 10:25:32.748899   12520 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0401 10:25:32.748899   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0401 10:25:32.792037   12520 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0401 10:25:32.792159   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0401 10:25:32.859444   12520 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0401 10:25:32.859580   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0401 10:25:32.869329   12520 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0401 10:25:32.869329   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0401 10:25:32.878317   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:32.879293   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:32.880771   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:32.938753   12520 pod_ready.go:102] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:32.969248   12520 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 10:25:32.969322   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 10:25:33.000119   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:33.000210   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:33.001174   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:33.043067   12520 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0401 10:25:33.043067   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0401 10:25:33.045934   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:33.046581   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:33.047155   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:33.057842   12520 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0401 10:25:33.057842   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0401 10:25:33.080072   12520 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0401 10:25:33.080254   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0401 10:25:33.083022   12520 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 10:25:33.083194   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0401 10:25:33.164702   12520 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0401 10:25:33.164702   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0401 10:25:33.237955   12520 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 10:25:33.238055   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 10:25:33.269918   12520 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0401 10:25:33.270056   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0401 10:25:33.490119   12520 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0401 10:25:33.490185   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0401 10:25:33.538538   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
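	Note: the registry addon lands as three manifests in one apply: a ReplicationController, its Service, and a per-node registry proxy. An illustrative way to inspect the result afterwards (a hypothetical invocation, not part of the test), assuming both objects are named registry as the manifest filenames suggest:

	    kubectl --context addons-852800 -n kube-system get rc,svc registry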
	I0401 10:25:33.573043   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 10:25:33.637725   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 10:25:33.686353   12520 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0401 10:25:33.686353   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0401 10:25:33.702951   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 10:25:33.771374   12520 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0401 10:25:33.771374   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0401 10:25:33.784364   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 10:25:33.843230   12520 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0401 10:25:33.843230   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0401 10:25:33.882093   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 10:25:33.952103   12520 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0401 10:25:33.952103   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0401 10:25:33.960604   12520 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0401 10:25:33.960604   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0401 10:25:34.077627   12520 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0401 10:25:34.077627   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0401 10:25:34.112796   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:34.112796   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:34.113474   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:34.230654   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:34.230654   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:34.231377   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:34.234095   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0401 10:25:34.240410   12520 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0401 10:25:34.240410   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0401 10:25:34.443620   12520 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0401 10:25:34.443620   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0401 10:25:34.445616   12520 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 10:25:34.445749   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0401 10:25:34.620553   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 10:25:34.661115   12520 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0401 10:25:34.661115   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0401 10:25:35.141343   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 10:25:35.468392   12520 pod_ready.go:97] pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.19.148.231 HostIPs:[{IP:172.19.148.231}] PodIP: PodIPs:[] StartTime:2024-04-01 10:25:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-01 10:25:24 +0000 UTC,FinishedAt:2024-04-01 10:25:34 +0000 UTC,ContainerID:docker://079968f7d6dda7a5a522c8ebfc0acd55432236ef87734f16bf1466320b7320df,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://079968f7d6dda7a5a522c8ebfc0acd55432236ef87734f16bf1466320b7320df Started:0xc003b342c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0401 10:25:35.468392   12520 pod_ready.go:81] duration metric: took 19.0566243s for pod "coredns-76f75df574-6tmk4" in "kube-system" namespace to be "Ready" ...
	E0401 10:25:35.468392   12520 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-6tmk4" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-01 10:25:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.19.148.231 HostIPs:[{IP:172.19.148.231}] PodIP: PodIPs:[] StartTime:2024-04-01 10:25:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-01 10:25:24 +0000 UTC,FinishedAt:2024-04-01 10:25:34 +0000 UTC,ContainerID:docker://079968f7d6dda7a5a522c8ebfc0acd55432236ef87734f16bf1466320b7320df,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://079968f7d6dda7a5a522c8ebfc0acd55432236ef87734f16bf1466320b7320df Started:0xc003b342c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0401 10:25:35.468392   12520 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zgx2j" in "kube-system" namespace to be "Ready" ...
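
The two entries above show why the first coredns pod was abandoned: its phase reached Succeeded (the container exited cleanly during the rollout), and a terminal pod can never become Ready again, so the waiter skips it and moves on to the replacement. A sketch of that phase check against the k8s.io/api types (not minikube's pod_ready.go itself):

	// readyOrSkip reports whether a pod is Ready, and whether a waiter should
	// skip it because it has already reached a terminal phase.
	package podready

	import corev1 "k8s.io/api/core/v1"

	func readyOrSkip(pod *corev1.Pod) (ready, skip bool) {
		// Succeeded/Failed pods never return to Ready; the log marks them "skipping!".
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return false, true
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, false
			}
		}
		return false, false
	}
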
	I0401 10:25:35.492719   12520 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 10:25:35.492719   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0401 10:25:35.500046   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:35.500046   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:35.500480   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:35.518507   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 10:25:36.570133   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 10:25:37.269459   12520 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0401 10:25:37.488472   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:38.070310   12520 addons.go:234] Setting addon gcp-auth=true in "addons-852800"
	I0401 10:25:38.070310   12520 host.go:66] Checking if "addons-852800" exists ...
	I0401 10:25:38.071313   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:39.615951   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:40.190248   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.6420291s)
	I0401 10:25:40.190248   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.701119s)
	I0401 10:25:40.400396   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.8618091s)
	I0401 10:25:40.400396   12520 addons.go:470] Verifying addon registry=true in "addons-852800"
	I0401 10:25:40.407432   12520 out.go:177] * Verifying registry addon...
	I0401 10:25:40.426507   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:40.432011   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:40.434233   12520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0401 10:25:40.446306   12520 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0401 10:25:40.446306   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0401 10:25:40.590020   12520 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 10:25:40.590082   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
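
The kapi.go:96 lines that dominate the rest of this log are a poll: list pods by label selector, report the current phase, sleep, repeat until every match is Ready. A simplified sketch of that loop against client-go, assuming a 500ms interval:

	// waitForLabel polls pods matching selector in ns until all report Ready.
	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allReady := len(pods.Items) > 0
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					allReady = false // still Pending, like the lines below
				}
			}
			if allReady {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
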
	I0401 10:25:40.950262   12520 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 10:25:40.950262   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:41.500211   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:41.952029   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:41.984181   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:42.498885   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:42.820794   12520 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:25:42.821212   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:42.821297   12520 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
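
The libmachine lines here are how the Windows driver talks to Hyper-V: every query is a PowerShell one-liner whose stdout and stderr are echoed back into the log. A sketch of the IP lookup using the exact expression logged above (an os/exec stand-in, not the actual driver code):

	// hypervIP runs the PowerShell expression from the log to fetch the first
	// IP address of a VM's first network adapter.
	package hyperv

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hypervIP(vmName string) (string, error) {
		expr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
		if err != nil {
			return "", fmt.Errorf("querying %s: %w", vmName, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
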
	I0401 10:25:42.977394   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:43.753454   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:44.170409   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:44.339359   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:44.534526   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:45.007433   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:45.183963   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.6102991s)
	I0401 10:25:45.184119   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.546239s)
	W0401 10:25:45.184046   12520 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 10:25:45.184241   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.4812083s)
	I0401 10:25:45.184311   12520 addons.go:470] Verifying addon metrics-server=true in "addons-852800"
	I0401 10:25:45.184333   12520 retry.go:31] will retry after 182.798735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
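
This failure is the usual CRD race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, so its REST mapping can be missing on the first pass; note that everything else in the batch was created. The retry.go entry above schedules a second attempt, which succeeds a few seconds later. A minimal sketch of that retry shape, assuming apply wraps the kubectl command:

	// retryApply re-runs an apply whose CRDs may not be registered yet, the
	// case behind "ensure CRDs are installed first". A doubling backoff gives
	// the API server time to start serving the new types' REST mappings.
	package applyretry

	import "time"

	func retryApply(apply func() error, attempts int) error {
		backoff := 200 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			time.Sleep(backoff)
			backoff *= 2
		}
		return err
	}
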
	I0401 10:25:45.392180   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 10:25:45.533401   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:45.642551   12520 main.go:141] libmachine: [stdout =====>] : 172.19.148.231
	
	I0401 10:25:45.642609   12520 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:25:45.642609   12520 sshutil.go:53] new ssh client: &{IP:172.19.148.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0401 10:25:46.000586   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:46.468712   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:46.546436   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:46.759223   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.9747666s)
	I0401 10:25:46.759223   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.8760283s)
	I0401 10:25:46.759223   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.5250388s)
	I0401 10:25:46.759223   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.1385834s)
	I0401 10:25:46.762112   12520 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-852800 service yakd-dashboard -n yakd-dashboard
	
	I0401 10:25:46.759223   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.617797s)
	I0401 10:25:46.760025   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.2406354s)
	I0401 10:25:46.760025   12520 addons.go:470] Verifying addon ingress=true in "addons-852800"
	I0401 10:25:46.767196   12520 out.go:177] * Verifying ingress addon...
	I0401 10:25:46.772298   12520 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0401 10:25:46.802823   12520 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0401 10:25:46.802873   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0401 10:25:46.835771   12520 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
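
The storage-provisioner-rancher warning is an optimistic-concurrency conflict: two writers raced to update the same StorageClass, and the losing write carried a stale resourceVersion. client-go's stock remedy is retry.RetryOnConflict, which re-reads the object before each attempt; a sketch (the helper name is illustrative):

	// markDefault sets the default-class annotation on a StorageClass, retrying
	// the "object has been modified" conflict reported above.
	package scdefault

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a conflict here triggers a fresh Get on the next attempt
		})
	}
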
	I0401 10:25:46.946570   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:47.282610   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:47.458399   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:47.789598   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:47.953113   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:48.298524   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:48.452835   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:48.821239   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:48.953459   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:48.981787   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (12.4110136s)
	I0401 10:25:48.981787   12520 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-852800"
	I0401 10:25:48.981787   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.5895822s)
	I0401 10:25:48.986451   12520 out.go:177] * Verifying csi-hostpath-driver addon...
	I0401 10:25:48.981787   12520 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.5354208s)
	I0401 10:25:49.001452   12520 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 10:25:49.000456   12520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0401 10:25:49.006448   12520 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0401 10:25:49.008492   12520 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0401 10:25:49.008492   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0401 10:25:49.027669   12520 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0401 10:25:49.027669   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:49.038732   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:49.097033   12520 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0401 10:25:49.097095   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0401 10:25:49.176580   12520 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 10:25:49.176668   12520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0401 10:25:49.241765   12520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 10:25:49.292326   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:49.443582   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:49.521151   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:49.789968   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:49.945595   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:50.032248   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:50.314821   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:50.555310   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:50.568878   12520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.3270161s)
	I0401 10:25:50.570936   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:50.577065   12520 addons.go:470] Verifying addon gcp-auth=true in "addons-852800"
	I0401 10:25:50.580283   12520 out.go:177] * Verifying gcp-auth addon...
	I0401 10:25:50.587235   12520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0401 10:25:50.620964   12520 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0401 10:25:50.620964   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:50.780772   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:50.954055   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:51.018285   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:51.094539   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:51.286328   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:51.443474   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:51.493301   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:51.520924   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:51.598254   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:52.163834   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:52.165205   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:52.169872   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:52.171857   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:52.279700   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:52.571351   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:52.575254   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:52.593867   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:52.784275   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:52.949431   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:53.025262   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:53.101112   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:53.294048   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:53.453624   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:53.517592   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:53.593402   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:53.786744   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:53.948343   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:53.979312   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:54.010244   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:54.103063   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:54.279475   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:54.454285   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:54.516880   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:54.593601   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:54.783714   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:54.942900   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:55.020419   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:55.098233   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:55.291065   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:55.448191   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:55.511174   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:55.606418   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:55.784808   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:55.956940   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:55.990579   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:56.021331   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:56.095979   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:56.287157   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:56.447960   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:56.514593   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:56.603476   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:56.794575   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:56.952454   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:57.015524   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:57.092686   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:57.284965   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:57.462377   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:57.521151   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:57.599109   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:57.792256   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:57.947375   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:58.010890   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:58.102726   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:58.426644   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:58.446593   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:58.481334   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:25:58.512913   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:58.966143   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:58.966209   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:58.966209   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:59.015648   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:59.107519   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:59.281743   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:59.455796   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:25:59.517239   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:25:59.592861   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:25:59.790874   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:25:59.944370   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:00.024761   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:00.102667   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:00.299673   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:00.455001   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:00.492075   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:26:00.518083   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:00.595578   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:00.786071   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:00.943912   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:01.025095   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:01.141444   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:01.290347   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:01.457792   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:01.526991   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:01.597000   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:01.787655   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:01.945260   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:02.025191   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:02.102326   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:02.291744   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:02.453148   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:02.517535   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:02.592077   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:02.786317   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:02.947229   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:02.979587   12520 pod_ready.go:102] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"False"
	I0401 10:26:03.024282   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:03.099910   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:03.292755   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:03.450681   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:03.513976   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:03.630647   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:03.781179   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:03.953543   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:03.992976   12520 pod_ready.go:92] pod "coredns-76f75df574-zgx2j" in "kube-system" namespace has status "Ready":"True"
	I0401 10:26:03.992976   12520 pod_ready.go:81] duration metric: took 28.5243818s for pod "coredns-76f75df574-zgx2j" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:03.992976   12520 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.000725   12520 pod_ready.go:92] pod "etcd-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0401 10:26:04.000725   12520 pod_ready.go:81] duration metric: took 7.7489ms for pod "etcd-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.000725   12520 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.009860   12520 pod_ready.go:92] pod "kube-apiserver-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0401 10:26:04.009860   12520 pod_ready.go:81] duration metric: took 9.135ms for pod "kube-apiserver-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.009860   12520 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.014685   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:04.021791   12520 pod_ready.go:92] pod "kube-controller-manager-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0401 10:26:04.021791   12520 pod_ready.go:81] duration metric: took 11.9305ms for pod "kube-controller-manager-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.021791   12520 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfxdl" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.029155   12520 pod_ready.go:92] pod "kube-proxy-pfxdl" in "kube-system" namespace has status "Ready":"True"
	I0401 10:26:04.029354   12520 pod_ready.go:81] duration metric: took 7.5417ms for pod "kube-proxy-pfxdl" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.029354   12520 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.104560   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:04.279653   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:04.389906   12520 pod_ready.go:92] pod "kube-scheduler-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0401 10:26:04.389906   12520 pod_ready.go:81] duration metric: took 360.5499ms for pod "kube-scheduler-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0401 10:26:04.389906   12520 pod_ready.go:38] duration metric: took 48.098933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 10:26:04.389906   12520 api_server.go:52] waiting for apiserver process to appear ...
	I0401 10:26:04.401489   12520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 10:26:04.437496   12520 api_server.go:72] duration metric: took 53.1877731s to wait for apiserver process to appear ...
	I0401 10:26:04.437496   12520 api_server.go:88] waiting for apiserver healthz status ...
	I0401 10:26:04.437496   12520 api_server.go:253] Checking apiserver healthz at https://172.19.148.231:8443/healthz ...
	I0401 10:26:04.447566   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:04.450996   12520 api_server.go:279] https://172.19.148.231:8443/healthz returned 200:
	ok
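
The healthz probe above is a plain HTTPS GET that expects a 200 with body "ok" before the version check that follows. A sketch of the same probe; it skips certificate verification to stay short, where the real client trusts the cluster CA:

	// apiserverHealthy issues the healthz probe logged above.
	package healthz

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func apiserverHealthy(host string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get(fmt.Sprintf("https://%s/healthz", host))
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}
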
	I0401 10:26:04.453083   12520 api_server.go:141] control plane version: v1.29.3
	I0401 10:26:04.453083   12520 api_server.go:131] duration metric: took 15.5867ms to wait for apiserver health ...
	I0401 10:26:04.453083   12520 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 10:26:04.519255   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:04.595730   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:04.604258   12520 system_pods.go:59] 18 kube-system pods found
	I0401 10:26:04.604258   12520 system_pods.go:61] "coredns-76f75df574-zgx2j" [98e3510f-d70d-406d-b05a-c0082a81bfc3] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "csi-hostpath-attacher-0" [b9c010f8-2faf-4300-be66-a9bd3220c56d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 10:26:04.604258   12520 system_pods.go:61] "csi-hostpath-resizer-0" [eda515fa-d789-4218-ad1f-2854a75c3981] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 10:26:04.604258   12520 system_pods.go:61] "csi-hostpathplugin-rqqvv" [61c56baf-9231-4177-83d9-1c37738b8de9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 10:26:04.604258   12520 system_pods.go:61] "etcd-addons-852800" [3f9c8049-47b1-4d36-b656-df3ec3ab8e87] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "kube-apiserver-addons-852800" [2f59742f-9391-4ecb-b59a-a20103cc5a75] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "kube-controller-manager-addons-852800" [f8f62bb6-3157-4749-83ec-2a7cd61f6cdc] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "kube-ingress-dns-minikube" [3047911a-3a49-41c6-bbd2-b932460aef63] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0401 10:26:04.604258   12520 system_pods.go:61] "kube-proxy-pfxdl" [4e4ffc20-c4ad-482d-98ae-3d55906649fc] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "kube-scheduler-addons-852800" [ac1aa14b-1115-496e-a1dd-5f6b41150c4e] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "metrics-server-75d6c48ddd-9j6db" [c4b50a6b-bd8a-4386-97e6-323009bdc6f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 10:26:04.604258   12520 system_pods.go:61] "nvidia-device-plugin-daemonset-lxk7c" [363a7316-182c-4a25-86ec-89e74ef033c5] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "registry-kg9pg" [b1418252-c10f-4107-b496-1d57938f8905] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0401 10:26:04.604258   12520 system_pods.go:61] "registry-proxy-75p9z" [d1990ac6-2eb1-49cc-9568-fe08ad36d51e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0401 10:26:04.604258   12520 system_pods.go:61] "snapshot-controller-58dbcc7b99-srt9v" [7e11dfe2-9f58-42dc-90b2-6d5a83473169] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:26:04.604258   12520 system_pods.go:61] "snapshot-controller-58dbcc7b99-zn488" [4d1eaa7e-429b-4ad8-abf3-6189ed860ce8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:26:04.604258   12520 system_pods.go:61] "storage-provisioner" [17aa9a16-d390-442e-9cee-662bbd68bf0a] Running
	I0401 10:26:04.604258   12520 system_pods.go:61] "tiller-deploy-7b677967b9-m88ns" [5136ac7c-84c6-4cd2-9d90-2d625f86676a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0401 10:26:04.604258   12520 system_pods.go:74] duration metric: took 151.174ms to wait for pod list to return data ...
	I0401 10:26:04.604258   12520 default_sa.go:34] waiting for default service account to be created ...
	I0401 10:26:04.784283   12520 default_sa.go:45] found service account: "default"
	I0401 10:26:04.784283   12520 default_sa.go:55] duration metric: took 180.0237ms for default service account to be created ...
	I0401 10:26:04.784283   12520 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 10:26:04.785252   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:04.945905   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:04.999325   12520 system_pods.go:86] 18 kube-system pods found
	I0401 10:26:04.999325   12520 system_pods.go:89] "coredns-76f75df574-zgx2j" [98e3510f-d70d-406d-b05a-c0082a81bfc3] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "csi-hostpath-attacher-0" [b9c010f8-2faf-4300-be66-a9bd3220c56d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 10:26:04.999325   12520 system_pods.go:89] "csi-hostpath-resizer-0" [eda515fa-d789-4218-ad1f-2854a75c3981] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 10:26:04.999325   12520 system_pods.go:89] "csi-hostpathplugin-rqqvv" [61c56baf-9231-4177-83d9-1c37738b8de9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 10:26:04.999325   12520 system_pods.go:89] "etcd-addons-852800" [3f9c8049-47b1-4d36-b656-df3ec3ab8e87] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "kube-apiserver-addons-852800" [2f59742f-9391-4ecb-b59a-a20103cc5a75] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "kube-controller-manager-addons-852800" [f8f62bb6-3157-4749-83ec-2a7cd61f6cdc] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "kube-ingress-dns-minikube" [3047911a-3a49-41c6-bbd2-b932460aef63] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0401 10:26:04.999325   12520 system_pods.go:89] "kube-proxy-pfxdl" [4e4ffc20-c4ad-482d-98ae-3d55906649fc] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "kube-scheduler-addons-852800" [ac1aa14b-1115-496e-a1dd-5f6b41150c4e] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "metrics-server-75d6c48ddd-9j6db" [c4b50a6b-bd8a-4386-97e6-323009bdc6f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 10:26:04.999325   12520 system_pods.go:89] "nvidia-device-plugin-daemonset-lxk7c" [363a7316-182c-4a25-86ec-89e74ef033c5] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "registry-kg9pg" [b1418252-c10f-4107-b496-1d57938f8905] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0401 10:26:04.999325   12520 system_pods.go:89] "registry-proxy-75p9z" [d1990ac6-2eb1-49cc-9568-fe08ad36d51e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0401 10:26:04.999325   12520 system_pods.go:89] "snapshot-controller-58dbcc7b99-srt9v" [7e11dfe2-9f58-42dc-90b2-6d5a83473169] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:26:04.999325   12520 system_pods.go:89] "snapshot-controller-58dbcc7b99-zn488" [4d1eaa7e-429b-4ad8-abf3-6189ed860ce8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:26:04.999325   12520 system_pods.go:89] "storage-provisioner" [17aa9a16-d390-442e-9cee-662bbd68bf0a] Running
	I0401 10:26:04.999325   12520 system_pods.go:89] "tiller-deploy-7b677967b9-m88ns" [5136ac7c-84c6-4cd2-9d90-2d625f86676a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0401 10:26:04.999325   12520 system_pods.go:126] duration metric: took 215.0404ms to wait for k8s-apps to be running ...
	I0401 10:26:04.999902   12520 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 10:26:05.011362   12520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 10:26:05.018376   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:05.046141   12520 system_svc.go:56] duration metric: took 46.2382ms WaitForService to wait for kubelet
	I0401 10:26:05.046141   12520 kubeadm.go:576] duration metric: took 53.7964133s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 10:26:05.046208   12520 node_conditions.go:102] verifying NodePressure condition ...
	I0401 10:26:05.100402   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:05.196029   12520 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 10:26:05.196029   12520 node_conditions.go:123] node cpu capacity is 2
	I0401 10:26:05.196029   12520 node_conditions.go:105] duration metric: took 149.8197ms to run NodePressure ...
	I0401 10:26:05.196029   12520 start.go:240] waiting for startup goroutines ...
	I0401 10:26:05.292910   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:05.452198   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:05.516529   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:05.606926   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:05.782760   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:05.952713   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:06.021532   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:06.094186   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:06.284769   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:06.444376   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:06.523828   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:06.600036   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:07.120254   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:07.120470   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:07.121817   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:07.122049   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:07.309055   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:07.732115   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:07.732693   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:07.735147   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:08.015818   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:08.017821   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:08.020512   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:08.107112   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:08.286593   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:08.447348   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:08.528169   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:08.602557   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:08.791146   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:08.947548   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:09.013752   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:09.107868   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:09.279365   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:09.454552   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:09.519043   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:09.594181   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:09.785664   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:09.949140   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:10.010715   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:10.101657   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:10.294302   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:10.453627   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:10.516892   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:10.600041   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:10.786554   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:10.954578   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:11.013230   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:11.102889   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:11.293355   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:11.453352   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:11.515497   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:11.592222   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:11.784264   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:11.941146   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:12.020162   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:12.169086   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:12.517189   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:12.520729   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:12.524978   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:12.596840   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:12.784243   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:12.963164   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:13.026921   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:13.099168   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:13.284598   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:13.444326   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:13.522211   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:13.598747   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:13.790161   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:13.946663   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:14.011508   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:14.101620   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:14.290355   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:14.444894   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:14.523485   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:14.598578   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:14.799834   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:14.949153   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:15.011062   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:15.100901   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:15.292440   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:15.450631   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:15.513064   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:15.606483   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:15.781737   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:15.955168   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:16.017876   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:16.096316   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:16.288798   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:16.448377   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:16.511668   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:16.602821   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:16.930546   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:16.967399   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:17.035422   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:17.109006   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:17.285272   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:17.443761   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:17.524295   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:17.599814   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:17.796401   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:17.950060   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:18.014979   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:18.107233   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:18.282296   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:18.458124   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:18.528262   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:18.595256   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:18.787214   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:18.946132   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:19.025030   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:19.099651   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:19.292970   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:19.452754   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:19.516968   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:19.607369   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:19.782732   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:19.953843   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:20.016402   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:20.092573   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:20.285322   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:20.444647   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:20.523057   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:20.600252   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:20.792122   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:20.947954   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:21.012540   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:21.105212   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:21.281728   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:21.442945   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:21.521798   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:21.598583   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:21.792064   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:21.950678   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:22.013283   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:22.105286   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:22.279837   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:22.454795   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:22.519388   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:22.606910   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:22.781674   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:22.954274   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:23.020963   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:23.101571   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:23.292641   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:23.448721   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:23.510854   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:23.604061   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:23.781937   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:23.954187   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:24.015367   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:24.093324   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:24.289027   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:24.442310   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:24.523741   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:24.598147   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:24.788534   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:24.947783   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:25.012676   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:25.105047   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:25.294435   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:25.452245   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:25.517608   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:25.608540   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:25.781773   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:25.956950   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:26.017831   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:26.094559   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:26.294498   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:26.450517   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:26.513514   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:26.604325   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:26.793793   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:26.954603   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:27.018785   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:27.094112   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:27.285788   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:27.444870   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:27.522681   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:27.598206   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:27.788893   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:27.948471   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:28.015241   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:28.103227   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:28.295918   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:28.453884   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:28.517278   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:28.593759   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:28.784278   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:28.944495   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:29.027920   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:29.098393   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:29.291561   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:29.450184   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:29.514317   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:29.605463   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:29.780332   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:29.954231   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:30.016663   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:30.108330   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:30.284951   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:30.444357   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:30.521918   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:30.598087   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:30.789354   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:30.946627   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:31.010028   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:31.100698   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:31.296255   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:31.452059   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:31.519483   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:31.607711   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:31.781690   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:31.955112   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:32.016620   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:32.165425   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:32.615390   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:32.633883   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:32.634445   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:32.634936   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:32.820928   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:32.955876   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:33.020019   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:33.094619   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:33.286361   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:33.446398   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:33.510116   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:33.601791   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:33.795213   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:33.954254   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:34.019081   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:34.109025   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:34.286451   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:34.444566   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:34.529442   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:34.599907   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:34.791742   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:34.947488   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:35.011144   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:35.101332   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:35.292547   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:35.451620   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:35.518313   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:35.607366   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:35.783575   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:35.957406   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:36.020198   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:36.220256   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:36.286936   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:36.457317   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:36.518968   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:37.372139   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:37.375338   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:37.376114   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:37.377279   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:37.419522   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:38.216695   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:38.216695   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:38.217549   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:38.230829   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:38.234457   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:38.235261   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:38.242788   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:38.295402   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:38.452409   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:38.524133   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:38.593507   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:38.791431   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:38.950857   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:39.030796   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:39.103681   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:39.342707   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:39.453436   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:39.515652   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:39.606052   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:39.780974   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:39.957275   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:40.016894   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:40.107203   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:40.297489   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:40.444367   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:40.523782   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:40.599715   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:40.792522   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:40.952227   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:41.013653   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:41.103996   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:41.294394   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:41.458537   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:41.517345   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:41.607480   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:41.781537   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:41.957239   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:42.020151   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:42.095270   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:42.289127   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:42.446112   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:42.524729   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:42.603276   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:42.781599   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:42.960647   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:43.020606   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:43.097040   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:43.287683   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:43.448393   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:43.511949   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:43.603247   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:43.780724   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:43.958045   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:44.021084   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:44.099919   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:44.286718   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:44.446761   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:44.526338   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:44.599587   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:44.792496   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:44.953926   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:45.019979   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:45.106030   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:45.281132   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:45.442895   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:45.521465   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:45.596770   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:45.786086   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:45.945932   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:46.024945   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:46.100282   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:46.279983   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:46.453184   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:46.517491   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:46.607546   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:46.784986   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:46.944649   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:47.022642   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:47.099836   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:47.290973   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:47.447477   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:47.513852   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:47.604221   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:47.779189   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:47.955006   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:48.020307   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:48.095621   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:48.287343   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:48.446177   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:48.510759   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:48.602915   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:48.793870   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:48.955366   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:49.019455   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:49.094613   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:49.287500   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:49.444888   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:49.527278   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:49.602130   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:49.794313   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:49.956798   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:50.016777   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:50.107819   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:50.295219   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:51.540535   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:51.541255   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:51.543793   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:51.545986   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:51.725065   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:51.726057   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:51.729178   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:51.729501   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:51.733319   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:51.794766   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:51.962002   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:52.023433   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:52.097239   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:52.289311   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:52.444931   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:26:52.523450   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:52.606953   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:52.793488   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:52.953206   12520 kapi.go:107] duration metric: took 1m12.5184599s to wait for kubernetes.io/minikube-addons=registry ...
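
The registry wait completes here, and the repeated kapi.go:96 "Pending" lines above are the visible trace of a per-label-selector polling loop. As a rough sketch under stated assumptions (plain client-go and a Running-phase check; this is not minikube's actual kapi package, whose readiness criteria may differ), that pattern looks like the following:

// waitpods.go: a minimal sketch (not minikube's kapi implementation) of
// polling pods by label selector until every match reports phase Running,
// the pattern behind the repeated "waiting for pod" lines above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(clientset *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := clientset.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Treat transient API errors as "not ready yet" and keep polling.
			log.Printf("listing pods: %v", err)
			return false, nil
		}
		if len(pods.Items) == 0 {
			// Nothing matching the selector has been scheduled yet.
			return false, nil
		}
		for _, pod := range pods.Items {
			if pod.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n",
					selector, pod.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Namespace, selector, and timeout mirror the registry wait in the log above.
	if err := waitForPods(clientset, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}

The 1m12.5184599s duration reported on this line matches the span of Pending lines for the registry selector above; the remaining selectors (csi-hostpath-driver, gcp-auth, app.kubernetes.io/name=ingress-nginx) keep polling below.
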
	I0401 10:26:53.016689   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:53.093306   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:53.285337   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:53.539336   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:53.601450   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:53.795651   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:54.020268   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:54.092811   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:54.283207   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:54.519638   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:54.607614   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:54.780901   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:55.021866   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:55.095134   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:55.284883   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:55.525554   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:55.602599   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:55.794194   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:56.017736   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:56.093108   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:56.285157   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:56.783471   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:56.784659   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:56.787336   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:57.016919   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:57.106948   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:57.284075   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:57.517396   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:57.604234   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:57.830678   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:58.030475   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:58.110810   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:58.281769   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:58.520138   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:58.598083   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:26:58.792825   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:26:59.016299   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:26:59.104955   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... ~370 similar "waiting for pod" lines elided: the same three selectors polled at ~0.5s intervals, all reporting Pending, 10:26:59 through 10:28:01 ...]
	I0401 10:28:01.525505   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:01.601969   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:01.792275   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:02.015268   12520 kapi.go:107] duration metric: took 2m13.0138711s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0401 10:28:02.110136   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:02.282586   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... ~65 similar "waiting for pod" lines elided: the gcp-auth and ingress-nginx selectors polled at ~0.5s intervals, both reporting Pending, 10:28:02 through 10:28:19 ...]
	I0401 10:28:19.284091   12520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:19.596968   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:19.788966   12520 kapi.go:107] duration metric: took 2m33.0155051s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0401 10:28:20.105946   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:20.597261   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:21.418217   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:21.608306   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:22.107562   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:22.606784   12520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:23.096717   12520 kapi.go:107] duration metric: took 2m32.5084023s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0401 10:28:23.099222   12520 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-852800 cluster.
	I0401 10:28:23.101271   12520 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0401 10:28:23.104487   12520 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0401 10:28:23.107410   12520 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, helm-tiller, storage-provisioner, metrics-server, ingress-dns, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0401 10:28:23.110213   12520 addons.go:505] duration metric: took 3m11.8609101s for enable addons: enabled=[nvidia-device-plugin cloud-spanner helm-tiller storage-provisioner metrics-server ingress-dns inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0401 10:28:23.110213   12520 start.go:245] waiting for cluster config update ...
	I0401 10:28:23.110213   12520 start.go:254] writing updated cluster config ...
	I0401 10:28:23.125591   12520 ssh_runner.go:195] Run: rm -f paused
	I0401 10:28:23.340335   12520 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 10:28:23.347569   12520 out.go:177] * Done! kubectl is now configured to use "addons-852800" cluster and "default" namespace by default
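
Editor's note: the kapi.go:96/kapi.go:107 lines above are minikube's pod-readiness polling, which lists pods by label selector roughly every 500ms until one reports Running, then emits the duration metric. Below is a minimal client-go sketch of that pattern; the namespace and selector are taken from this run's log, while the deadline and everything else are illustrative assumptions, not minikube's actual kapi.go.

// Minimal sketch of a label-selector pod wait loop, assuming a local
// kubeconfig. Illustration only; not minikube source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=gcp-auth" // from the log above
	deadline := time.Now().Add(6 * time.Minute)          // assumed timeout

	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("gcp-auth").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod is Running")
					return
				}
			}
			fmt.Printf("waiting for pod %q, still Pending\n", selector)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for pod")
}
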
	
	
	==> Docker <==
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.572524453Z" level=info msg="shim disconnected" id=9957ca7a2b5ba5bb749abbff7b9d5b36cda8860ae7616fa7e77038a835aba118 namespace=moby
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.572670649Z" level=warning msg="cleaning up after shim disconnected" id=9957ca7a2b5ba5bb749abbff7b9d5b36cda8860ae7616fa7e77038a835aba118 namespace=moby
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.572684549Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.592483683Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:29:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.861366453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.861677945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.861780343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:29:14 addons-852800 dockerd[1350]: time="2024-04-01T10:29:14.862224632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:29:15 addons-852800 cri-dockerd[1234]: time="2024-04-01T10:29:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/553ddf8f98f846afa628c11d64acc8caefdcf28e4e7fe3ae3fb4eec5563d43ca/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 01 10:29:15 addons-852800 dockerd[1350]: time="2024-04-01T10:29:15.574865625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:29:15 addons-852800 dockerd[1350]: time="2024-04-01T10:29:15.574931025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:29:15 addons-852800 dockerd[1350]: time="2024-04-01T10:29:15.574944325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:29:15 addons-852800 dockerd[1350]: time="2024-04-01T10:29:15.575055125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:29:15 addons-852800 cri-dockerd[1234]: time="2024-04-01T10:29:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d5824114483e2e44604c628e99a8947e18279334ffdeab07a9c38f619c31ae3/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 01 10:29:18 addons-852800 cri-dockerd[1234]: time="2024-04-01T10:29:18Z" level=info msg="Stop pulling image docker.io/alpine/helm:2.16.3: Status: Downloaded newer image for alpine/helm:2.16.3"
	Apr 01 10:29:18 addons-852800 dockerd[1342]: time="2024-04-01T10:29:18.650403997Z" level=warning msg="reference for unknown type: " digest="sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f" remote="ghcr.io/headlamp-k8s/headlamp@sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f" spanID=604eafc41a1a5aa7 traceID=0b731c319e03cf9ec36f0b10cca41b90
	Apr 01 10:29:19 addons-852800 dockerd[1350]: time="2024-04-01T10:29:19.271392355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:29:19 addons-852800 dockerd[1350]: time="2024-04-01T10:29:19.272140873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:29:19 addons-852800 dockerd[1350]: time="2024-04-01T10:29:19.272362878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:29:19 addons-852800 dockerd[1350]: time="2024-04-01T10:29:19.272899991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:29:19 addons-852800 dockerd[1342]: time="2024-04-01T10:29:19.812549189Z" level=info msg="ignoring event" container=bfdc9ce58c8bbc613545477b1b168e126811f72a136e336b3bdffd64cbb73ec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:29:19 addons-852800 dockerd[1350]: time="2024-04-01T10:29:19.814012424Z" level=info msg="shim disconnected" id=bfdc9ce58c8bbc613545477b1b168e126811f72a136e336b3bdffd64cbb73ec3 namespace=moby
	Apr 01 10:29:19 addons-852800 dockerd[1350]: time="2024-04-01T10:29:19.814679840Z" level=warning msg="cleaning up after shim disconnected" id=bfdc9ce58c8bbc613545477b1b168e126811f72a136e336b3bdffd64cbb73ec3 namespace=moby
	Apr 01 10:29:19 addons-852800 dockerd[1350]: time="2024-04-01T10:29:19.814764042Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:29:19 addons-852800 dockerd[1342]: time="2024-04-01T10:29:19.848746954Z" level=warning msg="failed to close stdin: task bfdc9ce58c8bbc613545477b1b168e126811f72a136e336b3bdffd64cbb73ec3 not found: not found"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	bfdc9ce58c8bb       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                          4 seconds ago        Exited              helm-test                    0                   553ddf8f98f84       helm-test
	d4227f7311073       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff            49 seconds ago       Exited              gadget                       3                   546659d5f2046       gadget-7w8sh
	9f60d9569565d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 About a minute ago   Running             gcp-auth                     0                   4d2ac4559d3fc       gcp-auth-7d69788767-bbf2k
	15a5c73b6daf9       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c             About a minute ago   Running             controller                   0                   0ff9e80dd1d5e       ingress-nginx-controller-65496f9567-sbm9d
	f68acc09b652c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   About a minute ago   Exited              patch                        0                   f47f5ba049e49       ingress-nginx-admission-patch-8r25b
	ac755675cfdc1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   About a minute ago   Exited              create                       0                   5a16836ed815a       ingress-nginx-admission-create-wnt5t
	4172bb75901c5       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      About a minute ago   Running             volume-snapshot-controller   0                   b87a1e3f50de4       snapshot-controller-58dbcc7b99-srt9v
	03e3140682003       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      About a minute ago   Running             volume-snapshot-controller   0                   d0ab941ae619c       snapshot-controller-58dbcc7b99-zn488
	cca6dd134ff3f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner       0                   d32f795b941ca       local-path-provisioner-78b46b4d5c-79jpr
	9724a2697988f       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        2 minutes ago        Running             yakd                         0                   fa7b8cfe4ca3c       yakd-dashboard-9947fc6bf-xptlz
	a38996ae24c03       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f             2 minutes ago        Running             minikube-ingress-dns         0                   b8dad55647570       kube-ingress-dns-minikube
	16dec8e908d57       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  3 minutes ago        Running             tiller                       0                   aa4c84740d1f9       tiller-deploy-7b677967b9-m88ns
	c02744df95e50       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2                     3 minutes ago        Running             nvidia-device-plugin-ctr     0                   27dbb2eeea129       nvidia-device-plugin-daemonset-lxk7c
	7da37f06e9026       6e38f40d628db                                                                                                                3 minutes ago        Running             storage-provisioner          0                   698f4ce948113       storage-provisioner
	2cbcbae89ce1f       a1d263b5dc5b0                                                                                                                4 minutes ago        Running             kube-proxy                   0                   970f09df646a1       kube-proxy-pfxdl
	291dc37460347       cbb01a7bd410d                                                                                                                4 minutes ago        Running             coredns                      0                   dac1021b7b825       coredns-76f75df574-zgx2j
	76530ebb9c31a       8c390d98f50c0                                                                                                                4 minutes ago        Running             kube-scheduler               0                   964fc127d6821       kube-scheduler-addons-852800
	f1ea8a8ef80f7       3861cfcd7c04c                                                                                                                4 minutes ago        Running             etcd                         0                   58de7063d6cd5       etcd-addons-852800
	a7d5dfa5b6172       6052a25da3f97                                                                                                                4 minutes ago        Running             kube-controller-manager      0                   1ce0c6684237b       kube-controller-manager-addons-852800
	640dd0307baa1       39f995c9f1996                                                                                                                4 minutes ago        Running             kube-apiserver               0                   635adce9f931d       kube-apiserver-addons-852800
	
	
	==> controller_ingress [15a5c73b6daf] <==
	  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	W0401 10:28:18.880919       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0401 10:28:18.881446       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0401 10:28:18.890409       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="29" git="v1.29.3" state="clean" commit="6813625b7cd706db5bc7388921be03071e1a492d" platform="linux/amd64"
	I0401 10:28:19.074994       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0401 10:28:19.112858       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0401 10:28:19.132923       7 nginx.go:265] "Starting NGINX Ingress controller"
	I0401 10:28:19.150478       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f9d2f021-9a05-4613-aca3-46119d053fdd", APIVersion:"v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0401 10:28:19.159472       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"e77954ed-375c-4236-80a0-3e0c57c33bf1", APIVersion:"v1", ResourceVersion:"719", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0401 10:28:19.160846       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"0ff81e3c-f477-4a79-9410-52c817f032f7", APIVersion:"v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0401 10:28:20.335975       7 nginx.go:308] "Starting NGINX process"
	I0401 10:28:20.336385       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0401 10:28:20.337045       7 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0401 10:28:20.337444       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0401 10:28:20.366311       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0401 10:28:20.366423       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-65496f9567-sbm9d"
	I0401 10:28:20.371944       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-65496f9567-sbm9d" node="addons-852800"
	I0401 10:28:20.423466       7 controller.go:210] "Backend successfully reloaded"
	I0401 10:28:20.423690       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0401 10:28:20.425087       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-65496f9567-sbm9d", UID:"71ce1476-c24c-4a83-91a2-559aa6babef0", APIVersion:"v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
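
Editor's note: the leaderelection.go:250/260 lines show the controller acquiring the `ingress-nginx/ingress-nginx-leader` Lease before reconciling. A minimal client-go sketch of that election follows; the lease name and namespace come from the log, while the timing constants and the rest are illustrative assumptions, not ingress-nginx source.

// Sketch of client-go Lease-based leader election. Illustration only.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	id, _ := os.Hostname() // candidate identity, like the pod name in the log

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "ingress-nginx-leader", Namespace: "ingress-nginx"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // assumed timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("successfully acquired lease") },
			OnStoppedLeading: func() { log.Println("lost lease") },
		},
	})
}
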
	
	
	==> coredns [291dc3746034] <==
	[INFO] 10.244.0.8:40640 - 55429 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001204s
	[INFO] 10.244.0.8:50357 - 41856 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002167s
	[INFO] 10.244.0.8:50357 - 5004 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000200901s
	[INFO] 10.244.0.8:57510 - 49996 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0002018s
	[INFO] 10.244.0.8:57510 - 9538 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000168001s
	[INFO] 10.244.0.8:52866 - 64017 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000559901s
	[INFO] 10.244.0.8:52866 - 20499 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000586301s
	[INFO] 10.244.0.8:58739 - 38050 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000219s
	[INFO] 10.244.0.8:58739 - 21152 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000706s
	[INFO] 10.244.0.8:47524 - 62502 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002644s
	[INFO] 10.244.0.8:47524 - 20004 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0004659s
	[INFO] 10.244.0.8:38242 - 30756 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000466s
	[INFO] 10.244.0.8:38242 - 38434 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001608s
	[INFO] 10.244.0.8:59091 - 50763 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001087s
	[INFO] 10.244.0.8:59091 - 44105 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000418201s
	[INFO] 10.244.0.22:37438 - 63053 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000308498s
	[INFO] 10.244.0.22:60076 - 65055 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170299s
	[INFO] 10.244.0.22:60998 - 30995 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001007s
	[INFO] 10.244.0.22:49154 - 16300 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127499s
	[INFO] 10.244.0.22:56526 - 31495 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001013s
	[INFO] 10.244.0.22:54763 - 56597 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000248999s
	[INFO] 10.244.0.22:45990 - 54889 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.00366198s
	[INFO] 10.244.0.22:35720 - 56481 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.003799579s
	[INFO] 10.244.0.26:38574 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000466597s
	[INFO] 10.244.0.26:36808 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000733395s
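
Editor's note: the NXDOMAIN/NOERROR bursts above are the resolver's `ndots:5` search-path expansion at work. `registry.kube-system.svc.cluster.local` has only four dots, so each search domain from the resolv.conf that cri-dockerd wrote earlier in this report is tried (and fails) before the bare name succeeds. A standalone sketch of the expansion rule, my own illustration rather than CoreDNS code:

// Illustrates glibc-style "ndots" search expansion, reproducing the
// query order seen in the coredns log above.
package main

import (
	"fmt"
	"strings"
)

// candidates returns the queries a resolver tries, in order, for name.
func candidates(name string, ndots int, search []string) []string {
	var out []string
	if strings.Count(name, ".") >= ndots {
		out = append(out, name) // "absolute enough": tried as-is first
	}
	for _, domain := range search {
		out = append(out, name+"."+domain)
	}
	if strings.Count(name, ".") < ndots {
		out = append(out, name) // bare name tried last
	}
	return out
}

func main() {
	// Search list from the resolv.conf cri-dockerd rewrote above.
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", 5, search) {
		fmt.Println(q)
	}
	// Output (matches the NXDOMAIN, NXDOMAIN, NXDOMAIN, NOERROR sequence):
	// registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local
	// registry.kube-system.svc.cluster.local.svc.cluster.local
	// registry.kube-system.svc.cluster.local.cluster.local
	// registry.kube-system.svc.cluster.local
}
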
	
	
	==> describe nodes <==
	Name:               addons-852800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-852800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=addons-852800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T10_24_58_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-852800
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 10:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-852800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 10:29:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 10:29:06 +0000   Mon, 01 Apr 2024 10:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 10:29:06 +0000   Mon, 01 Apr 2024 10:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 10:29:06 +0000   Mon, 01 Apr 2024 10:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 10:29:06 +0000   Mon, 01 Apr 2024 10:25:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.148.231
	  Hostname:    addons-852800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ec2c5a30d57402fb7a176b78a9d6c53
	  System UUID:                ef2dc86e-e1a9-f94a-976a-2bc131ed6236
	  Boot ID:                    ed0ec3db-04ff-4e93-9b1f-bd3e9ff28442
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-7w8sh                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  gcp-auth                    gcp-auth-7d69788767-bbf2k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  headlamp                    headlamp-5b77dbd7c4-9jr89                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  ingress-nginx               ingress-nginx-controller-65496f9567-sbm9d    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m37s
	  kube-system                 coredns-76f75df574-zgx2j                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m12s
	  kube-system                 etcd-addons-852800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m25s
	  kube-system                 kube-apiserver-addons-852800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-addons-852800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-proxy-pfxdl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-addons-852800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 nvidia-device-plugin-daemonset-lxk7c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 snapshot-controller-58dbcc7b99-srt9v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 snapshot-controller-58dbcc7b99-zn488         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 tiller-deploy-7b677967b9-m88ns               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  local-path-storage          local-path-provisioner-78b46b4d5c-79jpr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-xptlz               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-852800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-852800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x7 over 4m34s)  kubelet          Node addons-852800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s                  kubelet          Node addons-852800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s                  kubelet          Node addons-852800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s                  kubelet          Node addons-852800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m23s                  kubelet          Node addons-852800 status is now: NodeReady
	  Normal  RegisteredNode           4m13s                  node-controller  Node addons-852800 event: Registered Node addons-852800 in Controller
	
	
	==> dmesg <==
	[  +6.513605] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.105866] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.962280] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.401133] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.076224] kauditd_printk_skb: 85 callbacks suppressed
	[  +8.979802] kauditd_printk_skb: 17 callbacks suppressed
	[Apr 1 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[ +24.755343] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 1 10:27] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.403899] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.716620] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.023708] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.103582] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.338985] kauditd_printk_skb: 2 callbacks suppressed
	[Apr 1 10:28] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.198129] kauditd_printk_skb: 8 callbacks suppressed
	[ +15.752375] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.009482] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.012438] kauditd_printk_skb: 58 callbacks suppressed
	[  +9.234374] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.015034] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.909973] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 1 10:29] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.141221] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.792337] kauditd_printk_skb: 37 callbacks suppressed
	
	
	==> etcd [f1ea8a8ef80f] <==
	{"level":"info","ts":"2024-04-01T10:28:17.912453Z","caller":"traceutil/trace.go:171","msg":"trace[1902201965] linearizableReadLoop","detail":"{readStateIndex:1331; appliedIndex:1330; }","duration":"129.012365ms","start":"2024-04-01T10:28:17.783406Z","end":"2024-04-01T10:28:17.912418Z","steps":["trace[1902201965] 'read index received'  (duration: 128.708166ms)","trace[1902201965] 'applied index is now lower than readState.Index'  (duration: 303.699µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T10:28:17.91288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.494761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14521"}
	{"level":"info","ts":"2024-04-01T10:28:17.912917Z","caller":"traceutil/trace.go:171","msg":"trace[1309512284] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1268; }","duration":"129.609861ms","start":"2024-04-01T10:28:17.783297Z","end":"2024-04-01T10:28:17.912907Z","steps":["trace[1309512284] 'agreement among raft nodes before linearized reading'  (duration: 129.221163ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T10:28:17.913436Z","caller":"traceutil/trace.go:171","msg":"trace[659819720] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"268.245871ms","start":"2024-04-01T10:28:17.64518Z","end":"2024-04-01T10:28:17.913426Z","steps":["trace[659819720] 'process raft request'  (duration: 266.995078ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:28:18.480064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.993411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-04-01T10:28:18.480143Z","caller":"traceutil/trace.go:171","msg":"trace[645829598] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1268; }","duration":"385.14681ms","start":"2024-04-01T10:28:18.094982Z","end":"2024-04-01T10:28:18.480129Z","steps":["trace[645829598] 'range keys from in-memory index tree'  (duration: 384.724612ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:28:18.480172Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T10:28:18.09485Z","time spent":"385.314409ms","remote":"127.0.0.1:52496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-04-01T10:28:18.480292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.937786ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14521"}
	{"level":"info","ts":"2024-04-01T10:28:18.480353Z","caller":"traceutil/trace.go:171","msg":"trace[1736251879] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1268; }","duration":"196.217684ms","start":"2024-04-01T10:28:18.284125Z","end":"2024-04-01T10:28:18.480343Z","steps":["trace[1736251879] 'range keys from in-memory index tree'  (duration: 195.716187ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:28:18.480643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"383.52172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-04-01T10:28:18.480701Z","caller":"traceutil/trace.go:171","msg":"trace[954307002] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1268; }","duration":"383.589119ms","start":"2024-04-01T10:28:18.097099Z","end":"2024-04-01T10:28:18.480688Z","steps":["trace[954307002] 'range keys from in-memory index tree'  (duration: 383.45992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:28:18.480741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T10:28:18.097092Z","time spent":"383.641419ms","remote":"127.0.0.1:52410","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4390,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-01T10:28:21.415043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.932939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-04-01T10:28:21.415171Z","caller":"traceutil/trace.go:171","msg":"trace[63414015] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1287; }","duration":"312.112738ms","start":"2024-04-01T10:28:21.103044Z","end":"2024-04-01T10:28:21.415156Z","steps":["trace[63414015] 'range keys from in-memory index tree'  (duration: 311.76084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:28:21.415536Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T10:28:21.103009Z","time spent":"312.188138ms","remote":"127.0.0.1:52410","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4390,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-04-01T10:29:18.962199Z","caller":"traceutil/trace.go:171","msg":"trace[1047995561] linearizableReadLoop","detail":"{readStateIndex:1782; appliedIndex:1781; }","duration":"215.961171ms","start":"2024-04-01T10:29:18.746221Z","end":"2024-04-01T10:29:18.962182Z","steps":["trace[1047995561] 'read index received'  (duration: 215.804767ms)","trace[1047995561] 'applied index is now lower than readState.Index'  (duration: 155.804µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T10:29:18.963171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.104206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/csi-hostpath-resizer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T10:29:18.963244Z","caller":"traceutil/trace.go:171","msg":"trace[169378311] range","detail":"{range_begin:/registry/services/specs/kube-system/csi-hostpath-resizer; range_end:; response_count:0; response_revision:1696; }","duration":"163.186708ms","start":"2024-04-01T10:29:18.800044Z","end":"2024-04-01T10:29:18.963231Z","steps":["trace[169378311] 'agreement among raft nodes before linearized reading'  (duration: 163.075805ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T10:29:18.96348Z","caller":"traceutil/trace.go:171","msg":"trace[723639992] transaction","detail":"{read_only:false; response_revision:1696; number_of_response:1; }","duration":"251.688927ms","start":"2024-04-01T10:29:18.711779Z","end":"2024-04-01T10:29:18.963467Z","steps":["trace[723639992] 'process raft request'  (duration: 250.298293ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:29:18.963652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.424607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T10:29:18.96579Z","caller":"traceutil/trace.go:171","msg":"trace[162775806] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:1696; }","duration":"219.585358ms","start":"2024-04-01T10:29:18.746195Z","end":"2024-04-01T10:29:18.965781Z","steps":["trace[162775806] 'agreement among raft nodes before linearized reading'  (duration: 217.426506ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:29:18.966163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.191829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-04-01T10:29:18.966221Z","caller":"traceutil/trace.go:171","msg":"trace[292370680] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1696; }","duration":"143.27203ms","start":"2024-04-01T10:29:18.822941Z","end":"2024-04-01T10:29:18.966213Z","steps":["trace[292370680] 'agreement among raft nodes before linearized reading'  (duration: 143.171028ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T10:29:23.35683Z","caller":"traceutil/trace.go:171","msg":"trace[1189428158] transaction","detail":"{read_only:false; response_revision:1706; number_of_response:1; }","duration":"354.483404ms","start":"2024-04-01T10:29:23.002327Z","end":"2024-04-01T10:29:23.35681Z","steps":["trace[1189428158] 'process raft request'  (duration: 354.336704ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T10:29:23.357058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T10:29:23.002314Z","time spent":"354.638204ms","remote":"127.0.0.1:52382","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1704 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [9f60d9569565] <==
	2024/04/01 10:28:21 GCP Auth Webhook started!
	2024/04/01 10:28:24 Ready to marshal response ...
	2024/04/01 10:28:24 Ready to write response ...
	2024/04/01 10:28:24 Ready to marshal response ...
	2024/04/01 10:28:24 Ready to write response ...
	2024/04/01 10:28:25 Ready to marshal response ...
	2024/04/01 10:28:25 Ready to write response ...
	2024/04/01 10:28:33 Ready to marshal response ...
	2024/04/01 10:28:33 Ready to write response ...
	2024/04/01 10:28:44 Ready to marshal response ...
	2024/04/01 10:28:44 Ready to write response ...
	2024/04/01 10:28:47 Ready to marshal response ...
	2024/04/01 10:28:47 Ready to write response ...
	2024/04/01 10:29:13 Ready to marshal response ...
	2024/04/01 10:29:13 Ready to write response ...
	2024/04/01 10:29:14 Ready to marshal response ...
	2024/04/01 10:29:14 Ready to write response ...
	2024/04/01 10:29:14 Ready to marshal response ...
	2024/04/01 10:29:14 Ready to write response ...
	2024/04/01 10:29:14 Ready to marshal response ...
	2024/04/01 10:29:14 Ready to write response ...
	2024/04/01 10:29:24 Ready to marshal response ...
	2024/04/01 10:29:24 Ready to write response ...
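
Editor's note: each request/response pair above is the gcp-auth webhook handling one AdmissionReview: decode, decide, marshal, write. A generic sketch of that cycle follows; it is an illustration of the standard mutating-webhook shape, not gcp-auth source, and it omits the JSONPatch that actually mounts the credentials.

// Generic mutating admission webhook handler, matching the
// "Ready to marshal response ... Ready to write response" cadence above.
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

func mutate(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: true, // a real mutating webhook would also attach a JSONPatch here
	}
	log.Println("Ready to marshal response ...")
	out, err := json.Marshal(review)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	log.Println("Ready to write response ...")
	w.Header().Set("Content-Type", "application/json")
	w.Write(out)
}

func main() {
	http.HandleFunc("/mutate", mutate)
	// Webhooks must serve TLS; the certificate paths here are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
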
	
	
	==> kernel <==
	 10:29:24 up 6 min,  0 users,  load average: 6.00, 3.24, 1.40
	Linux addons-852800 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [640dd0307baa] <==
	E0401 10:27:02.785325       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.131.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.131.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.131.22:443: connect: connection refused
	W0401 10:27:02.785419       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 10:27:02.785473       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0401 10:27:02.787681       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.131.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.131.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.131.22:443: connect: connection refused
	E0401 10:27:02.791238       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.131.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.131.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.131.22:443: connect: connection refused
	I0401 10:27:02.908249       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0401 10:27:17.255184       1 trace.go:236] Trace[408781654]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.148.231,type:*v1.Endpoints,resource:apiServerIPInfo (01-Apr-2024 10:27:16.731) (total time: 523ms):
	Trace[408781654]: ---"Transaction prepared" 192ms (10:27:16.924)
	Trace[408781654]: ---"Txn call completed" 329ms (10:27:17.254)
	Trace[408781654]: [523.054237ms] [523.054237ms] END
	I0401 10:27:36.937389       1 trace.go:236] Trace[1787949365]: "Update" accept:application/json, */*,audit-id:e8c258b5-2e4c-410a-9a91-d5d23d5d2ce1,client:10.244.0.12,api-group:coordination.k8s.io,api-version:v1,name:snapshot-controller-leader,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/snapshot-controller-leader,user-agent:snapshot-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (01-Apr-2024 10:27:36.435) (total time: 502ms):
	Trace[1787949365]: ["GuaranteedUpdate etcd3" audit-id:e8c258b5-2e4c-410a-9a91-d5d23d5d2ce1,key:/leases/kube-system/snapshot-controller-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 501ms (10:27:36.435)
	Trace[1787949365]:  ---"Txn call completed" 500ms (10:27:36.936)]
	Trace[1787949365]: [502.107976ms] [502.107976ms] END
	I0401 10:28:01.178926       1 trace.go:236] Trace[424169984]: "Get" accept:application/json, */*,audit-id:bafeb52d-38ff-49e5-915d-043a2b50e1bd,client:172.19.148.231,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (01-Apr-2024 10:28:00.623) (total time: 555ms):
	Trace[424169984]: ---"About to write a response" 555ms (10:28:01.178)
	Trace[424169984]: [555.514953ms] [555.514953ms] END
	I0401 10:28:01.192090       1 trace.go:236] Trace[342156194]: "List" accept:application/json, */*,audit-id:83b432fe-19c2-4f5e-8c25-93600bb5c8e9,client:172.19.144.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (01-Apr-2024 10:28:00.596) (total time: 594ms):
	Trace[342156194]: ["List(recursive=true) etcd3" audit-id:83b432fe-19c2-4f5e-8c25-93600bb5c8e9,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 595ms (10:28:00.596)]
	Trace[342156194]: [594.445904ms] [594.445904ms] END
	I0401 10:28:34.684043       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0401 10:29:03.805985       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0401 10:29:14.659908       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.188.190"}
	E0401 10:29:19.785452       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 172.19.148.231:8443->10.244.0.29:48036: read: connection reset by peer
	
	
	==> kube-controller-manager [a7d5dfa5b617] <==
	I0401 10:28:14.156657       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0401 10:28:19.678451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="182.199µs"
	I0401 10:28:22.904130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="27.613245ms"
	I0401 10:28:22.904683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="80.399µs"
	I0401 10:28:23.898508       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0401 10:28:23.920474       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 10:28:24.289844       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 10:28:24.780154       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 10:28:25.410333       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 10:28:32.784678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="26.312721ms"
	I0401 10:28:32.786459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="79.8µs"
	I0401 10:28:38.006657       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 10:28:40.411689       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 10:28:43.798071       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 10:28:45.490252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-75d6c48ddd" duration="4.4µs"
	I0401 10:28:58.110357       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="7.8µs"
	I0401 10:29:08.360792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5446596998" duration="12.199µs"
	I0401 10:29:12.922622       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0401 10:29:13.350540       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I0401 10:29:14.802522       1 event.go:376] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-5b77dbd7c4 to 1"
	I0401 10:29:14.854141       1 event.go:376] "Event occurred" object="headlamp/headlamp-5b77dbd7c4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-5b77dbd7c4-9jr89"
	I0401 10:29:14.893512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="111.28248ms"
	I0401 10:29:14.965226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="71.279722ms"
	I0401 10:29:14.965407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="91.497µs"
	I0401 10:29:15.070069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="89.4µs"
	
	
	==> kube-proxy [2cbcbae89ce1] <==
	I0401 10:25:24.901713       1 server_others.go:72] "Using iptables proxy"
	I0401 10:25:24.974265       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.148.231"]
	I0401 10:25:25.259762       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 10:25:25.259962       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 10:25:25.260041       1 server_others.go:168] "Using iptables Proxier"
	I0401 10:25:25.276706       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 10:25:25.277708       1 server.go:865] "Version info" version="v1.29.3"
	I0401 10:25:25.277759       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 10:25:25.292678       1 config.go:188] "Starting service config controller"
	I0401 10:25:25.292736       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 10:25:25.292778       1 config.go:97] "Starting endpoint slice config controller"
	I0401 10:25:25.292880       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 10:25:25.294155       1 config.go:315] "Starting node config controller"
	I0401 10:25:25.294207       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 10:25:25.393043       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 10:25:25.401103       1 shared_informer.go:318] Caches are synced for service config
	I0401 10:25:25.401849       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [76530ebb9c31] <==
	W0401 10:24:55.164423       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 10:24:55.164842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 10:24:55.245744       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 10:24:55.246314       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 10:24:55.365735       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 10:24:55.366142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 10:24:55.396099       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 10:24:55.396334       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 10:24:55.432737       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 10:24:55.432772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 10:24:55.444259       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 10:24:55.444288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 10:24:55.585753       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 10:24:55.585808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 10:24:55.631122       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 10:24:55.631233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 10:24:55.703311       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 10:24:55.703348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 10:24:55.718497       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 10:24:55.718752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 10:24:55.722742       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 10:24:55.723162       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 10:24:55.761829       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 10:24:55.761880       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0401 10:24:57.588014       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 10:29:15 addons-852800 kubelet[2848]: I0401 10:29:15.623288    2848 scope.go:117] "RemoveContainer" containerID="5bd635af1a6eebd326a41f9962c52ced0be4a2f817ae26d39cbed909b784ae8f"
	Apr 01 10:29:15 addons-852800 kubelet[2848]: I0401 10:29:15.678273    2848 scope.go:117] "RemoveContainer" containerID="5bd635af1a6eebd326a41f9962c52ced0be4a2f817ae26d39cbed909b784ae8f"
	Apr 01 10:29:15 addons-852800 kubelet[2848]: E0401 10:29:15.680501    2848 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 5bd635af1a6eebd326a41f9962c52ced0be4a2f817ae26d39cbed909b784ae8f" containerID="5bd635af1a6eebd326a41f9962c52ced0be4a2f817ae26d39cbed909b784ae8f"
	Apr 01 10:29:15 addons-852800 kubelet[2848]: I0401 10:29:15.680647    2848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"5bd635af1a6eebd326a41f9962c52ced0be4a2f817ae26d39cbed909b784ae8f"} err="failed to get container status \"5bd635af1a6eebd326a41f9962c52ced0be4a2f817ae26d39cbed909b784ae8f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 5bd635af1a6eebd326a41f9962c52ced0be4a2f817ae26d39cbed909b784ae8f"
	Apr 01 10:29:16 addons-852800 kubelet[2848]: I0401 10:29:16.708513    2848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c56baf-9231-4177-83d9-1c37738b8de9" path="/var/lib/kubelet/pods/61c56baf-9231-4177-83d9-1c37738b8de9/volumes"
	Apr 01 10:29:16 addons-852800 kubelet[2848]: I0401 10:29:16.713533    2848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c010f8-2faf-4300-be66-a9bd3220c56d" path="/var/lib/kubelet/pods/b9c010f8-2faf-4300-be66-a9bd3220c56d/volumes"
	Apr 01 10:29:16 addons-852800 kubelet[2848]: I0401 10:29:16.720950    2848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda515fa-d789-4218-ad1f-2854a75c3981" path="/var/lib/kubelet/pods/eda515fa-d789-4218-ad1f-2854a75c3981/volumes"
	Apr 01 10:29:19 addons-852800 kubelet[2848]: I0401 10:29:19.508943    2848 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Apr 01 10:29:19 addons-852800 kubelet[2848]: I0401 10:29:19.542148    2848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/helm-test" podStartSLOduration=3.330185268 podStartE2EDuration="6.542097725s" podCreationTimestamp="2024-04-01 10:29:13 +0000 UTC" firstStartedPulling="2024-04-01 10:29:15.282053094 +0000 UTC m=+256.956063291" lastFinishedPulling="2024-04-01 10:29:18.493965551 +0000 UTC m=+260.167975748" observedRunningTime="2024-04-01 10:29:19.540373584 +0000 UTC m=+261.214383881" watchObservedRunningTime="2024-04-01 10:29:19.542097725 +0000 UTC m=+261.216108022"
	Apr 01 10:29:19 addons-852800 kubelet[2848]: E0401 10:29:19.612257    2848 remote_runtime.go:557] "Attach container from runtime service failed" err="rpc error: code = InvalidArgument desc = tty and stderr cannot both be true" containerID="bfdc9ce58c8bbc613545477b1b168e126811f72a136e336b3bdffd64cbb73ec3"
	Apr 01 10:29:20 addons-852800 kubelet[2848]: I0401 10:29:20.689296    2848 scope.go:117] "RemoveContainer" containerID="d4227f7311073a3fcb4a535c6114cd24c2326b18c3a9b55239bebfcae085f723"
	Apr 01 10:29:23 addons-852800 kubelet[2848]: I0401 10:29:23.607734    2848 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcmwk\" (UniqueName: \"kubernetes.io/projected/9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb-kube-api-access-kcmwk\") pod \"9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb\" (UID: \"9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb\") "
	Apr 01 10:29:23 addons-852800 kubelet[2848]: I0401 10:29:23.611082    2848 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb-kube-api-access-kcmwk" (OuterVolumeSpecName: "kube-api-access-kcmwk") pod "9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb" (UID: "9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb"). InnerVolumeSpecName "kube-api-access-kcmwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 01 10:29:23 addons-852800 kubelet[2848]: I0401 10:29:23.709428    2848 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kcmwk\" (UniqueName: \"kubernetes.io/projected/9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb-kube-api-access-kcmwk\") on node \"addons-852800\" DevicePath \"\""
	Apr 01 10:29:23 addons-852800 kubelet[2848]: I0401 10:29:23.787344    2848 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553ddf8f98f846afa628c11d64acc8caefdcf28e4e7fe3ae3fb4eec5563d43ca"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: I0401 10:29:24.063711    2848 topology_manager.go:215] "Topology Admit Handler" podUID="41554ac4-ff15-4e6f-bf0c-61d975bbd159" podNamespace="kube-system" podName="helm-test"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: E0401 10:29:24.064617    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c56baf-9231-4177-83d9-1c37738b8de9" containerName="liveness-probe"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: E0401 10:29:24.064658    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c56baf-9231-4177-83d9-1c37738b8de9" containerName="csi-provisioner"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: E0401 10:29:24.064671    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c56baf-9231-4177-83d9-1c37738b8de9" containerName="csi-snapshotter"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: E0401 10:29:24.064683    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb" containerName="helm-test"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: E0401 10:29:24.064695    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c56baf-9231-4177-83d9-1c37738b8de9" containerName="csi-external-health-monitor-controller"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: E0401 10:29:24.064718    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c56baf-9231-4177-83d9-1c37738b8de9" containerName="node-driver-registrar"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: E0401 10:29:24.064726    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c56baf-9231-4177-83d9-1c37738b8de9" containerName="hostpath"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: I0401 10:29:24.064778    2848 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0be05e-8d17-4e0b-9773-b01bdd8bb8eb" containerName="helm-test"
	Apr 01 10:29:24 addons-852800 kubelet[2848]: I0401 10:29:24.120649    2848 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrdj\" (UniqueName: \"kubernetes.io/projected/41554ac4-ff15-4e6f-bf0c-61d975bbd159-kube-api-access-nkrdj\") pod \"helm-test\" (UID: \"41554ac4-ff15-4e6f-bf0c-61d975bbd159\") " pod="kube-system/helm-test"
	
	
	==> storage-provisioner [7da37f06e902] <==
	I0401 10:25:44.753170       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 10:25:44.810421       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 10:25:44.810473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 10:25:45.025732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 10:25:45.026009       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-852800_a3d85404-c909-4346-b55b-bb4046004c86!
	I0401 10:25:45.037833       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a44952e-f79f-4679-bc1a-77f6b4c8b61c", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-852800_a3d85404-c909-4346-b55b-bb4046004c86 became leader
	I0401 10:25:45.126305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-852800_a3d85404-c909-4346-b55b-bb4046004c86!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:29:13.082824    6572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
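(Aside, not part of the captured output: the stderr warning above recurs throughout this report. The long hex directory in the path is not random; the Docker CLI keys each context's on-disk metadata by the SHA-256 of the context name, and hashing "default" reproduces it. A minimal Go check, standard library only:)

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// The Docker CLI stores context metadata under
	// ~/.docker/contexts/meta/<sha256(context name)>/meta.json;
	// the warning fires because that file is absent on this CI host.
	fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	// prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
}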
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-852800 -n addons-852800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-852800 -n addons-852800: (13.8059576s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-852800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-wnt5t ingress-nginx-admission-patch-8r25b
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-852800 describe pod ingress-nginx-admission-create-wnt5t ingress-nginx-admission-patch-8r25b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-852800 describe pod ingress-nginx-admission-create-wnt5t ingress-nginx-admission-patch-8r25b: exit status 1 (173.5453ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wnt5t" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8r25b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-852800 describe pod ingress-nginx-admission-create-wnt5t ingress-nginx-admission-patch-8r25b: exit status 1
--- FAIL: TestAddons/parallel/Registry (77.00s)
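The post-mortem helpers above follow a fixed recipe: select pods whose status.phase is not Running via a field selector, then describe each one. The exit status 1 here is benign, since both ingress-nginx admission Jobs had already been cleaned up by the time describe ran. A minimal Go sketch of that recipe (assuming kubectl is on PATH; the context name addons-852800 is taken from the log above):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// listNonRunning returns the names of pods (across all namespaces) whose
// status.phase is anything other than Running, mirroring the field selector
// used by the post-mortem helper above.
func listNonRunning(context string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := listNonRunning("addons-852800")
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods {
		// describe may legitimately fail with NotFound if the pod belonged
		// to a completed Job that was since deleted, as happened above.
		out, err := exec.Command("kubectl", "--context", "addons-852800",
			"describe", "pod", p).CombinedOutput()
		fmt.Printf("--- %s (err=%v) ---\n%s", p, err, out)
	}
}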

                                                
                                    
x
+
TestErrorSpam/setup (204.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-189500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 --driver=hyperv
E0401 10:33:23.431884    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:23.447526    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:23.463331    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:23.495082    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:23.542088    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:23.638260    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:23.813451    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:24.148601    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:24.804311    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:26.098614    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:28.662193    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:33.786605    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:33:44.028728    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:34:04.516655    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:34:45.477713    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:36:07.402923    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-189500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 --driver=hyperv: (3m24.3331969s)
error_spam_test.go:96: unexpected stderr: "W0401 10:33:01.859183    6004 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-189500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18551
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-189500" primary control-plane node in "nospam-189500" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-189500" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0401 10:33:01.859183    6004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (204.33s)
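Note that the setup itself succeeded (the start completed in 3m24s); the test fails only because `start` wrote a stderr line that is not on the suite's allowlist of benign messages. A minimal sketch of that kind of spam check; the allowlist below is hypothetical, the real patterns live in error_spam_test.go:

package main

import (
	"fmt"
	"strings"
)

// unexpectedStderr returns every non-empty stderr line that does not match
// one of the allowed substrings.
func unexpectedStderr(stderr string, allowed []string) []string {
	var bad []string
line:
	for _, ln := range strings.Split(stderr, "\n") {
		ln = strings.TrimSpace(ln)
		if ln == "" {
			continue
		}
		for _, ok := range allowed {
			if strings.Contains(ln, ok) {
				continue line
			}
		}
		bad = append(bad, ln)
	}
	return bad
}

func main() {
	// The offending line from this run; the hypothetical allowlist does not
	// cover it, so the check reports it, just as the real test did.
	stderr := `W0401 10:33:01.859183    6004 main.go:291] Unable to resolve the current Docker CLI context "default"`
	for _, ln := range unexpectedStderr(stderr, []string{"! Executing"}) {
		fmt.Println("unexpected stderr:", ln)
	}
}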

                                                
                                    
x
+
TestFunctional/serial/SoftStart (347.05s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-706500 --alsologtostderr -v=8
E0401 10:43:23.422739    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-706500 --alsologtostderr -v=8: exit status 90 (2m33.3528816s)
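The stdout below stops right after "Updating the running hyperv ... VM", i.e. mid-provisioning, and, if minikube's exit-code grouping is read correctly, codes in the 90s indicate container-runtime errors rather than VM or host failures. For anyone scripting around this, a minimal Go sketch of recovering the code from a failed run (profile name taken from the log; minikube assumed on PATH):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Re-run a soft start against the existing profile and surface the code.
	cmd := exec.Command("minikube", "start", "-p", "functional-706500",
		"--alsologtostderr", "-v=8")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Per minikube's documented exit-code ranges, 90-99 map to
		// container-runtime problems, matching the exit status 90 above.
		fmt.Println("minikube start exit code:", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}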

                                                
                                                
-- stdout --
	* [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	* Updating the running hyperv "functional-706500" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:43:09.997203   13224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 10:43:10.072128   13224 out.go:291] Setting OutFile to fd 716 ...
	I0401 10:43:10.073323   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.073384   13224 out.go:304] Setting ErrFile to fd 712...
	I0401 10:43:10.073384   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.097024   13224 out.go:298] Setting JSON to false
	I0401 10:43:10.100726   13224 start.go:129] hostinfo: {"hostname":"minikube6","uptime":310948,"bootTime":1711657241,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:43:10.100838   13224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:43:10.105172   13224 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:43:10.107554   13224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:43:10.107554   13224 notify.go:220] Checking for updates...
	I0401 10:43:10.112799   13224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:43:10.115283   13224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:43:10.117610   13224 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:43:10.121040   13224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:43:10.124505   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:10.124505   13224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:43:15.710035   13224 out.go:177] * Using the hyperv driver based on existing profile
	I0401 10:43:15.713423   13224 start.go:297] selected driver: hyperv
	I0401 10:43:15.713545   13224 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.713861   13224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:43:15.770030   13224 cni.go:84] Creating CNI manager for ""
	I0401 10:43:15.770109   13224 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:43:15.770329   13224 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.770329   13224 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:43:15.777998   13224 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 10:43:15.780145   13224 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:43:15.780237   13224 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:43:15.780237   13224 cache.go:56] Caching tarball of preloaded images
	I0401 10:43:15.780237   13224 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 10:43:15.780237   13224 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 10:43:15.780237   13224 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
	I0401 10:43:15.783414   13224 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 10:43:15.783414   13224 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-706500"
	I0401 10:43:15.784054   13224 start.go:96] Skipping create...Using existing machine configuration
	I0401 10:43:15.784054   13224 fix.go:54] fixHost starting: 
	I0401 10:43:15.785053   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:18.643052   13224 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 10:43:18.643052   13224 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 10:43:18.646761   13224 out.go:177] * Updating the running hyperv "functional-706500" VM ...
	I0401 10:43:18.648735   13224 machine.go:94] provisionDockerMachine start ...
	I0401 10:43:18.648735   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:20.929729   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:23.605499   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:23.605829   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:23.611863   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:23.612379   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:23.612379   13224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 10:43:23.742743   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:23.742861   13224 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 10:43:23.742998   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:25.983575   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:28.656268   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:28.656268   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:28.656268   13224 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 10:43:28.820899   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:28.821051   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:31.040178   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:33.703614   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:33.704376   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:33.709326   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:33.710241   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:33.710241   13224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 10:43:33.843134   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:43:33.843192   13224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 10:43:33.843284   13224 buildroot.go:174] setting up certificates
	I0401 10:43:33.843348   13224 provision.go:84] configureAuth start
	I0401 10:43:33.843416   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:36.090770   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:38.722800   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:40.943290   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:40.943497   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:40.943706   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:43.583483   13224 provision.go:143] copyHostCerts
	I0401 10:43:43.584428   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 10:43:43.584791   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 10:43:43.584791   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 10:43:43.585153   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 10:43:43.586561   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 10:43:43.586884   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 10:43:43.586884   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 10:43:43.587236   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 10:43:43.588288   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 10:43:43.588425   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 10:43:43.589822   13224 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
	I0401 10:43:43.806505   13224 provision.go:177] copyRemoteCerts
	I0401 10:43:43.818752   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 10:43:43.819653   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:46.035972   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:48.717231   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:43:48.824115   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0033951s)
	I0401 10:43:48.824185   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 10:43:48.824251   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 10:43:48.875490   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 10:43:48.875659   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 10:43:48.928156   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 10:43:48.928156   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 10:43:48.979923   13224 provision.go:87] duration metric: took 15.1364681s to configureAuth
	I0401 10:43:48.980191   13224 buildroot.go:189] setting minikube options for container-runtime
	I0401 10:43:48.980337   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:48.980929   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:51.164916   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:53.799230   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:53.799230   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:53.799230   13224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 10:43:53.939680   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 10:43:53.939680   13224 buildroot.go:70] root file system type: tmpfs
	I0401 10:43:53.939937   13224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 10:43:53.940027   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:58.817419   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:58.817500   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:58.817500   13224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 10:43:58.984176   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 10:43:58.984176   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:01.191660   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:03.881634   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:03.881634   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:03.881634   13224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 10:44:04.035399   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:44:04.035399   13224 machine.go:97] duration metric: took 45.3863416s to provisionDockerMachine
	I0401 10:44:04.035734   13224 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 10:44:04.035734   13224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 10:44:04.052038   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 10:44:04.052038   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:06.304118   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:09.045318   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:09.151365   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0992303s)
	I0401 10:44:09.165345   13224 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 10:44:09.173410   13224 command_runner.go:130] > NAME=Buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 10:44:09.173410   13224 command_runner.go:130] > ID=buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 10:44:09.173410   13224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 10:44:09.173410   13224 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 10:44:09.173410   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 10:44:09.174038   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 10:44:09.175293   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 10:44:09.175293   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 10:44:09.176304   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 10:44:09.176304   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> /etc/test/nested/copy/1260/hosts
	I0401 10:44:09.189471   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 10:44:09.209890   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 10:44:09.266189   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 10:44:09.335015   13224 start.go:296] duration metric: took 5.2992427s for postStartSetup
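
The filesync scan above treats .minikube\files as a mirror of the guest root: each host asset maps to the guest path obtained by stripping the scan root and flipping path separators, which is how ...\files\etc\ssl\certs\12602.pem lands at /etc/ssl/certs/12602.pem. A sketch of just that mapping, printing instead of copying over SSH (not minikube's vm_assets code):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	root := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\files`
    	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, _ := filepath.Rel(root, p)
    		// e.g. etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
    		fmt.Printf("%s -> /%s\n", p, strings.ReplaceAll(rel, `\`, "/"))
    		return nil
    	})
    	if err != nil {
    		fmt.Println(err)
    	}
    }
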
	I0401 10:44:09.335234   13224 fix.go:56] duration metric: took 53.550799s for fixHost
	I0401 10:44:09.335234   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:11.525913   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:14.194016   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:14.194016   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:14.194565   13224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 10:44:14.343852   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711968254.338870760
	
	I0401 10:44:14.343852   13224 fix.go:216] guest clock: 1711968254.338870760
	I0401 10:44:14.343852   13224 fix.go:229] Guest: 2024-04-01 10:44:14.33887076 +0000 UTC Remote: 2024-04-01 10:44:09.335234 +0000 UTC m=+59.451830901 (delta=5.00363676s)
	I0401 10:44:14.344039   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:16.578130   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:19.271412   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:19.271412   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:19.271412   13224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711968254
	I0401 10:44:19.442247   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 10:44:14 UTC 2024
	
	I0401 10:44:19.442303   13224 fix.go:236] clock set: Mon Apr  1 10:44:14 UTC 2024
	 (err=<nil>)
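
The drift check above parses the guest's `date +%s.%N` output against the host-side reference captured when fixHost finished, finds a 5.00363676s delta, and since that exceeds the allowed drift it resets the guest with `sudo date -s @<epoch>`. A sketch of the delta computation using the logged values (the 2s threshold is illustrative, not minikube's exact cutoff):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	guestOut := "1711968254.338870760" // guest `date +%s.%N` from the log
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	// host-side reference from the log (the "Remote:" value)
    	host := time.Date(2024, 4, 1, 10, 44, 9, 335234000, time.UTC)

    	delta := guest.Sub(host)
    	fmt.Printf("delta=%v\n", delta) // prints delta=5.00363676s, as logged

    	if math.Abs(delta.Seconds()) > 2 { // illustrative threshold
    		// reset the guest to the host's current epoch, as the log does next
    		fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
    	}
    }
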
	I0401 10:44:19.442303   13224 start.go:83] releasing machines lock for "functional-706500", held for 1m3.6584368s
	I0401 10:44:19.442621   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:21.638421   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:24.247802   13224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 10:44:24.247971   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:24.260879   13224 ssh_runner.go:195] Run: cat /version.json
	I0401 10:44:24.260879   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:29.395439   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.396663   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.396737   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.417726   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.538631   13224 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2907196s)
	I0401 10:44:29.538631   13224 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: cat /version.json: (5.2777142s)
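
Note that both preflight probes are issued before either finishes: the curl against registry.k8s.io and the `cat /version.json` each get their own Hyper-V IP lookup and SSH client, and both complete together roughly five seconds later, i.e. they run concurrently. A stdlib-only sketch of that fan-out pattern, with local exec standing in for the SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"sync"
    )

    func main() {
    	cmds := [][]string{
    		{"curl", "-sS", "-m", "2", "https://registry.k8s.io/"},
    		{"cat", "/version.json"},
    	}
    	var wg sync.WaitGroup
    	for _, argv := range cmds {
    		wg.Add(1)
    		go func(argv []string) {
    			defer wg.Done()
    			out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
    			fmt.Printf("%v: err=%v, %d bytes\n", argv, err, len(out))
    		}(argv)
    	}
    	wg.Wait() // both probes complete before provisioning continues
    }
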
	I0401 10:44:29.550754   13224 ssh_runner.go:195] Run: systemctl --version
	I0401 10:44:29.560248   13224 command_runner.go:130] > systemd 252 (252)
	I0401 10:44:29.560248   13224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 10:44:29.575260   13224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 10:44:29.584117   13224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 10:44:29.584848   13224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 10:44:29.596050   13224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 10:44:29.618740   13224 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
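
The find/mv pass above disables any bridge or podman CNI configs by renaming them to `<name>.mk_disabled` so the kubelet ignores them; here nothing matched. The same pass in plain Go (directory and naming taken from the logged command):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err == nil {
    				fmt.Println("disabled", src)
    			}
    		}
    	}
    }
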
	I0401 10:44:29.618740   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:29.619282   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:29.653783   13224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 10:44:29.667491   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 10:44:29.699929   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 10:44:29.719747   13224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 10:44:29.731685   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 10:44:29.769559   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.808138   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 10:44:29.839942   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.872793   13224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 10:44:29.905536   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 10:44:29.943322   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 10:44:29.976065   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 10:44:30.009202   13224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 10:44:30.027840   13224 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 10:44:30.041084   13224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 10:44:30.074413   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:30.372540   13224 ssh_runner.go:195] Run: sudo systemctl restart containerd
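
The sed pipeline above rewrites /etc/containerd/config.toml in place: pinning the pause image, forcing `SystemdCgroup = false` (the "cgroupfs" driver), migrating the v1 runtime names to `io.containerd.runc.v2`, and resetting conf_dir, before enabling IP forwarding and restarting containerd. The core cgroup-driver rewrite as a single regexp pass, equivalent to the logged `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// (?m) makes ^/$ match per line; ${1} keeps the original indentation
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		fmt.Println(err)
    	}
    }
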
	I0401 10:44:30.410347   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:30.423188   13224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 10:44:30.448708   13224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 10:44:30.448805   13224 command_runner.go:130] > [Unit]
	I0401 10:44:30.448846   13224 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 10:44:30.448846   13224 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 10:44:30.448846   13224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 10:44:30.448846   13224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 10:44:30.448846   13224 command_runner.go:130] > StartLimitBurst=3
	I0401 10:44:30.448960   13224 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 10:44:30.448960   13224 command_runner.go:130] > [Service]
	I0401 10:44:30.448960   13224 command_runner.go:130] > Type=notify
	I0401 10:44:30.449175   13224 command_runner.go:130] > Restart=on-failure
	I0401 10:44:30.449254   13224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 10:44:30.449254   13224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 10:44:30.449324   13224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 10:44:30.449324   13224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 10:44:30.449324   13224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 10:44:30.449324   13224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 10:44:30.449324   13224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 10:44:30.449424   13224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 10:44:30.449463   13224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 10:44:30.449493   13224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNOFILE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNPROC=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitCORE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0401 10:44:30.449493   13224 command_runner.go:130] > TasksMax=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > TimeoutStartSec=0
	I0401 10:44:30.449493   13224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 10:44:30.449493   13224 command_runner.go:130] > Delegate=yes
	I0401 10:44:30.449493   13224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 10:44:30.449493   13224 command_runner.go:130] > KillMode=process
	I0401 10:44:30.449493   13224 command_runner.go:130] > [Install]
	I0401 10:44:30.449493   13224 command_runner.go:130] > WantedBy=multi-user.target
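
The unit dumped above exposes the docker API on tcp://0.0.0.0:2376 behind mutual TLS (`--tlsverify` with the server cert/key under /etc/docker). A client-side sketch of pinging that endpoint from the host; the ca.pem/cert.pem/key.pem locations below are assumed minikube defaults, not taken from this log:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"net/http"
    	"os"
    )

    func main() {
    	base := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs` // assumption
    	ca, err := os.ReadFile(base + `\ca.pem`)
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(ca)
    	cert, err := tls.LoadX509KeyPair(base+`\cert.pem`, base+`\key.pem`)
    	if err != nil {
    		panic(err)
    	}
    	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
    		RootCAs:      pool,
    		Certificates: []tls.Certificate{cert},
    	}}}
    	resp, err := client.Get("https://172.19.145.71:2376/_ping") // VM IP from the log
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	fmt.Println("docker API:", resp.Status)
    }
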
	I0401 10:44:30.462236   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.498715   13224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 10:44:30.555736   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.592141   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:44:30.614828   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:30.648516   13224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0401 10:44:30.661442   13224 ssh_runner.go:195] Run: which cri-dockerd
	I0401 10:44:30.667057   13224 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 10:44:30.680368   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 10:44:30.698489   13224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
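
Having settled on docker as the runtime, the provisioner repoints crictl from the containerd socket written earlier to cri-dockerd, then drops a 10-cni.conf override into cri-docker.service.d (the "scp memory" lines copy generated bytes, not host files). The crictl repoint as a plain file write, standing in for the tee-over-SSH above:

    package main

    import "os"

    func main() {
    	// same content the logged tee wrote to /etc/crictl.yaml
    	conf := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
    	if err := os.MkdirAll("/etc", 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0644); err != nil {
    		panic(err)
    	}
    }
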
	I0401 10:44:30.744122   13224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 10:44:31.017163   13224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 10:44:31.281375   13224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 10:44:31.281375   13224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 10:44:31.335768   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:31.610524   13224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 10:45:43.060669   13224 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0401 10:45:43.060669   13224 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0401 10:45:43.063059   13224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4520277s)
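
This is the failure that sinks the test: `systemctl restart docker` blocks for 1m11s, the control process exits non-zero, and the runner immediately pulls the unit's journal (next line) so the failure cause lands in the report. A sketch of that restart-then-diagnose flow, with local exec standing in for the SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	if err := exec.Command("sudo", "systemctl", "restart", "docker").Run(); err != nil {
    		fmt.Println("restart failed:", err)
    		// capture the unit's journal, as the log does with journalctl below
    		out, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", "docker").Output()
    		fmt.Printf("%s", out)
    	}
    }
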
	I0401 10:45:43.077122   13224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.111111   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.111139   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.111214   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112397   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112470   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112542   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112898   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.115799   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.116447   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.118558   13224 command_runner.go:130] > Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	I0401 10:45:43.118619   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.118656   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.118786   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.118818   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.118867   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.118899   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119018   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119050   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122002   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
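	
	Note: the journal echoed above shows the third docker.service start (dockerd[5139], "Starting up" at 10:44:43) failing exactly 60s later with 'failed to dial "/run/containerd/containerd.sock": context deadline exceeded'. Unlike the two earlier starts, this attempt never logs "containerd not running, starting managed containerd", which suggests dockerd was waiting on a system containerd socket at /run/containerd/containerd.sock that never came up, instead of spawning its managed containerd under /var/run/docker/containerd/ as before. A minimal diagnostic sketch against the guest, assuming the profile name functional-706500 taken from the log above (all commands are standard minikube/systemd/coreutils tooling, not part of this report):
	
	    # open a shell in the guest for this profile (profile name assumed from the log)
	    out/minikube-windows-amd64.exe -p functional-706500 ssh
	    # inside the guest: confirm the unit state and recent daemon output
	    sudo systemctl status docker.service
	    sudo journalctl -u docker --no-pager | tail -n 50
	    # check the socket dockerd failed to dial vs. where the managed containerd lived on earlier starts
	    ls -l /run/containerd/containerd.sock
	    ls -l /var/run/docker/containerd/
	    # is any containerd process actually running?
	    ps aux | grep -v grep | grep containerd
	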
	I0401 10:45:43.152344   13224 out.go:177] 
	W0401 10:45:43.155037   13224 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 10:45:43.156924   13224 out.go:239] * 
	W0401 10:45:43.158470   13224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 10:45:43.165740   13224 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-706500 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m33.9546768s for "functional-706500" cluster.
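The root cause is visible at the end of the journal above: systemd restarted docker.service at 10:44:42, dockerd logged "Starting up" at 10:44:43, and exactly 60 seconds later it aborted because it could not dial /run/containerd/containerd.sock. Note that the earlier successful boot shows containerd serving on /var/run/docker/containerd/containerd.sock (the daemon-managed socket), which suggests the restarted daemon was waiting on a system containerd socket that never appeared. As a minimal sketch of that failure shape (plain Go, not minikube or moby code; the socket path and the 60s deadline are copied from the log), a dial loop that retries until its context expires returns the same "context deadline exceeded":

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForSocket redials the unix socket until it answers or until ctx
// expires; the terminal error wraps ctx.Err(), which prints as
// "context deadline exceeded", matching the dockerd failure above.
func waitForSocket(ctx context.Context, path string) error {
	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", path)
		if err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
		fmt.Println(err) // e.g. failed to dial "...": context deadline exceeded
		return
	}
	fmt.Println("containerd socket is reachable")
}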
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500: exit status 2 (12.3746554s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0401 10:45:43.944627   13308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
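The status probe itself is a soft failure: stdout reports the Host as "Running" while the command exits 2, and the harness notes this "may be ok" since other components can legitimately be down after a failed soft start. For reference only (a generic Go sketch, not the harness code; the binary path and arguments are copied from the (dbg) line above), this is how such a run's exit code can be captured:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"status", "--format={{.Host}}", "-p", "functional-706500", "-n", "functional-706500")
	out, err := cmd.Output() // stdout only; a non-zero exit comes back as *exec.ExitError
	fmt.Printf("stdout: %s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 2 in the run above
	}
}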
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 logs -n 25
E0401 10:48:23.424929    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 logs -n 25: (2m47.8265304s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| ip      | addons-852800 ip                                                      | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	| addons  | addons-852800 addons disable                                          | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |                |                     |                     |
	|         | -v=1                                                                  |                   |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                          | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                          | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |                |                     |                     |
	|         | -v=1                                                                  |                   |                   |                |                     |                     |
	| stop    | -p addons-852800                                                      | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:32 UTC |
	| addons  | enable dashboard -p                                                   | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                         |                   |                   |                |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                         |                   |                   |                |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                         |                   |                   |                |                     |                     |
	| delete  | -p addons-852800                                                      | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:33 UTC |
	| start   | -p nospam-189500 -n=1 --memory=2250 --wait=false                      | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:33 UTC | 01 Apr 24 10:36 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | start --dry-run                                                       |                   |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | start --dry-run                                                       |                   |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | start --dry-run                                                       |                   |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | pause                                                                 |                   |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | pause                                                                 |                   |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | pause                                                                 |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | unpause                                                               |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | unpause                                                               |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | unpause                                                               |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | stop                                                                  |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | stop                                                                  |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | stop                                                                  |                   |                   |                |                     |                     |
	| delete  | -p nospam-189500                                                      | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	| start   | -p functional-706500                                                  | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
	|         | --memory=4000                                                         |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                                  | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |                |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:43:10
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:43:10.072128   13224 out.go:291] Setting OutFile to fd 716 ...
	I0401 10:43:10.073323   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.073384   13224 out.go:304] Setting ErrFile to fd 712...
	I0401 10:43:10.073384   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.097024   13224 out.go:298] Setting JSON to false
	I0401 10:43:10.100726   13224 start.go:129] hostinfo: {"hostname":"minikube6","uptime":310948,"bootTime":1711657241,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:43:10.100838   13224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:43:10.105172   13224 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:43:10.107554   13224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:43:10.107554   13224 notify.go:220] Checking for updates...
	I0401 10:43:10.112799   13224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:43:10.115283   13224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:43:10.117610   13224 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:43:10.121040   13224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:43:10.124505   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:10.124505   13224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:43:15.710035   13224 out.go:177] * Using the hyperv driver based on existing profile
	I0401 10:43:15.713423   13224 start.go:297] selected driver: hyperv
	I0401 10:43:15.713545   13224 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.713861   13224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:43:15.770030   13224 cni.go:84] Creating CNI manager for ""
	I0401 10:43:15.770109   13224 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:43:15.770329   13224 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.770329   13224 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:43:15.777998   13224 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 10:43:15.780145   13224 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:43:15.780237   13224 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:43:15.780237   13224 cache.go:56] Caching tarball of preloaded images
	I0401 10:43:15.780237   13224 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 10:43:15.780237   13224 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 10:43:15.780237   13224 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
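Editor's note: the cluster config dumped above is persisted as JSON at the profile path named in this line. A minimal sketch of reading it back; the struct below is a small assumed subset of minikube's real ClusterConfig, not the full type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // ClusterConfig mirrors only a few of the fields visible in the dump
    // above; the real struct in minikube carries many more.
    type ClusterConfig struct {
        Name             string
        Driver           string
        KubernetesConfig struct {
            KubernetesVersion string
            ClusterName       string
        }
    }

    func main() {
        path := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json`
        b, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        var cfg ClusterConfig
        if err := json.Unmarshal(b, &cfg); err != nil {
            panic(err)
        }
        fmt.Println(cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion)
    }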
	I0401 10:43:15.783414   13224 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 10:43:15.783414   13224 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-706500"
	I0401 10:43:15.784054   13224 start.go:96] Skipping create...Using existing machine configuration
	I0401 10:43:15.784054   13224 fix.go:54] fixHost starting: 
	I0401 10:43:15.785053   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:18.643052   13224 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 10:43:18.643052   13224 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 10:43:18.646761   13224 out.go:177] * Updating the running hyperv "functional-706500" VM ...
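Editor's note: every [executing ==>] line in this log is the Hyper-V driver shelling out to a fresh powershell.exe, and the two-to-three-second gap between each command and its [stdout =====>] echo is PowerShell start-up cost. A minimal sketch of the same state query; vmState is a hypothetical helper, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmState runs the same Get-VM query seen in the log and returns the
    // trimmed stdout ("Running", "Off", ...).
    func vmState(name string) (string, error) {
        ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
        cmd := exec.Command(ps, "-NoProfile", "-NonInteractive",
            fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := vmState("functional-706500")
        fmt.Println(state, err)
    }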
	I0401 10:43:18.648735   13224 machine.go:94] provisionDockerMachine start ...
	I0401 10:43:18.648735   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:20.929729   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:23.605499   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:23.605829   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:23.611863   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:23.612379   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:23.612379   13224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 10:43:23.742743   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:23.742861   13224 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 10:43:23.742998   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:25.983575   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:28.656268   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:28.656268   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:28.656268   13224 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 10:43:28.820899   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:28.821051   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:31.040178   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:33.703614   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:33.704376   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:33.709326   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:33.710241   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:33.710241   13224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 10:43:33.843134   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
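Editor's note: hostname provisioning above runs three SSH steps: read the current hostname, set it via sudo hostname plus /etc/hostname, then ensure /etc/hosts maps 127.0.1.1 to the node name so local resolution works. A minimal sketch of one such remote command, assuming golang.org/x/crypto/ssh and the machine key path that appears later in this log:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "172.19.145.71:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }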
	I0401 10:43:33.843192   13224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 10:43:33.843284   13224 buildroot.go:174] setting up certificates
	I0401 10:43:33.843348   13224 provision.go:84] configureAuth start
	I0401 10:43:33.843416   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:36.090770   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:38.722800   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:40.943290   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:40.943497   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:40.943706   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:43.583483   13224 provision.go:143] copyHostCerts
	I0401 10:43:43.584428   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 10:43:43.584791   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 10:43:43.584791   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 10:43:43.585153   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 10:43:43.586561   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 10:43:43.586884   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 10:43:43.586884   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 10:43:43.587236   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 10:43:43.588288   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 10:43:43.588425   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 10:43:43.589822   13224 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
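Editor's note: the server cert is minted locally with the SAN set shown above (loopback, the VM's Hyper-V address, and the usual hostnames) and pushed to /etc/docker in the next step. A minimal standard-library sketch of a cert with that SAN list; it is self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-706500"}},
            // SAN list copied from the provision.go line above.
            DNSNames:    []string{"functional-706500", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.145.71")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed to keep the sketch short; minikube signs with its CA.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }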
	I0401 10:43:43.806505   13224 provision.go:177] copyRemoteCerts
	I0401 10:43:43.818752   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 10:43:43.819653   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:46.035972   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:48.717231   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:43:48.824115   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0033951s)
	I0401 10:43:48.824185   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 10:43:48.824251   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 10:43:48.875490   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 10:43:48.875659   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 10:43:48.928156   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 10:43:48.928156   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 10:43:48.979923   13224 provision.go:87] duration metric: took 15.1364681s to configureAuth
	I0401 10:43:48.980191   13224 buildroot.go:189] setting minikube options for container-runtime
	I0401 10:43:48.980337   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:48.980929   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:51.164916   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:53.799230   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:53.799230   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:53.799230   13224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 10:43:53.939680   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 10:43:53.939680   13224 buildroot.go:70] root file system type: tmpfs
	I0401 10:43:53.939937   13224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 10:43:53.940027   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:58.817419   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:58.817500   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:58.817500   13224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 10:43:58.984176   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 10:43:58.984176   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:01.191660   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:03.881634   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:03.881634   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:03.881634   13224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 10:44:04.035399   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:44:04.035399   13224 machine.go:97] duration metric: took 45.3863416s to provisionDockerMachine
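Editor's note on the unit update above: the new unit is written to docker.service.new and only swapped in (followed by daemon-reload, enable, restart) when diff -u reports a difference, so an unchanged config never bounces Docker. Also, the %!s(MISSING) in the logged command is not part of what ran on the VM: the command string contains a literal %s (printf %s "[Unit]..."), and Go's fmt layer mangles an unmatched verb when the string is logged; the same artifact appears later as %!N(MISSING) and %!p(MISSING) for date +%s.%N and find -printf "%p". A short demonstration of the artifact:

    package main

    import "fmt"

    func main() {
        // Logging a command that contains a literal %s through Printf with no
        // matching argument reproduces the artifact seen in the log.
        fmt.Printf("About to run: printf %s \"[Unit]...\"\n")
        // Output: About to run: printf %!s(MISSING) "[Unit]..."
    }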
	I0401 10:44:04.035734   13224 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 10:44:04.035734   13224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 10:44:04.052038   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 10:44:04.052038   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:06.304118   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:09.045318   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:09.151365   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0992303s)
	I0401 10:44:09.165345   13224 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 10:44:09.173410   13224 command_runner.go:130] > NAME=Buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 10:44:09.173410   13224 command_runner.go:130] > ID=buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 10:44:09.173410   13224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 10:44:09.173410   13224 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 10:44:09.173410   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 10:44:09.174038   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 10:44:09.175293   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 10:44:09.175293   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 10:44:09.176304   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 10:44:09.176304   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> /etc/test/nested/copy/1260/hosts
	I0401 10:44:09.189471   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 10:44:09.209890   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 10:44:09.266189   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 10:44:09.335015   13224 start.go:296] duration metric: took 5.2992427s for postStartSetup
	I0401 10:44:09.335234   13224 fix.go:56] duration metric: took 53.550799s for fixHost
	I0401 10:44:09.335234   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:11.525913   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:14.194016   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:14.194016   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:14.194565   13224 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 10:44:14.343852   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711968254.338870760
	
	I0401 10:44:14.343852   13224 fix.go:216] guest clock: 1711968254.338870760
	I0401 10:44:14.343852   13224 fix.go:229] Guest: 2024-04-01 10:44:14.33887076 +0000 UTC Remote: 2024-04-01 10:44:09.335234 +0000 UTC m=+59.451830901 (delta=5.00363676s)
	I0401 10:44:14.344039   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:16.578130   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:19.271412   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:19.271412   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:19.271412   13224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711968254
	I0401 10:44:19.442247   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 10:44:14 UTC 2024
	
	I0401 10:44:19.442303   13224 fix.go:236] clock set: Mon Apr  1 10:44:14 UTC 2024
	 (err=<nil>)
	I0401 10:44:19.442303   13224 start.go:83] releasing machines lock for "functional-706500", held for 1m3.6584368s
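Editor's note: the clock-sync step above reads the guest clock with date +%s.%N (logged with the %! artifact noted earlier), compares it to the host, and, since the 5.00363676s delta exceeds tolerance, resets it with sudo date -s @1711968254. A minimal sketch of the delta computation; guestDelta is a hypothetical helper:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestDelta parses the guest's `date +%s.%N` output and returns guest
    // minus host, the quantity fix.go logs as "delta".
    func guestDelta(guestOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        if len(parts) != 2 {
            return 0, fmt.Errorf("unexpected date output %q", guestOut)
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        host := time.Date(2024, time.April, 1, 10, 44, 9, 335234000, time.UTC)
        d, _ := guestDelta("1711968254.338870760", host)
        fmt.Println(d) // ~5.00363676s, matching the delta logged above
    }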
	I0401 10:44:19.442621   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:21.638421   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:24.247802   13224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 10:44:24.247971   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:24.260879   13224 ssh_runner.go:195] Run: cat /version.json
	I0401 10:44:24.260879   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:29.395439   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.396663   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.396737   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.417726   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.538631   13224 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2907196s)
	I0401 10:44:29.538631   13224 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: cat /version.json: (5.2777142s)
	I0401 10:44:29.550754   13224 ssh_runner.go:195] Run: systemctl --version
	I0401 10:44:29.560248   13224 command_runner.go:130] > systemd 252 (252)
	I0401 10:44:29.560248   13224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 10:44:29.575260   13224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 10:44:29.584117   13224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 10:44:29.584848   13224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 10:44:29.596050   13224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 10:44:29.618740   13224 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 10:44:29.618740   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:29.619282   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:29.653783   13224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 10:44:29.667491   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 10:44:29.699929   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 10:44:29.719747   13224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 10:44:29.731685   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 10:44:29.769559   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.808138   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 10:44:29.839942   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.872793   13224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 10:44:29.905536   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 10:44:29.943322   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 10:44:29.976065   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 10:44:30.009202   13224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 10:44:30.027840   13224 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 10:44:30.041084   13224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 10:44:30.074413   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:30.372540   13224 ssh_runner.go:195] Run: sudo systemctl restart containerd
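Editor's note: although Docker is the target runtime, the containerd config is normalized first. The sed calls above pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, and migrate legacy runc runtime names; containerd is restarted here and stopped again further down once Docker takes over via cri-dockerd. A minimal Go equivalent of the SystemdCgroup rewrite, for illustration only:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        config := "    [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n      SystemdCgroup = true\n"
        // Same substitution as:
        //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }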
	I0401 10:44:30.410347   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:30.423188   13224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 10:44:30.448708   13224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 10:44:30.448805   13224 command_runner.go:130] > [Unit]
	I0401 10:44:30.448846   13224 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 10:44:30.448846   13224 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 10:44:30.448846   13224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 10:44:30.448846   13224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 10:44:30.448846   13224 command_runner.go:130] > StartLimitBurst=3
	I0401 10:44:30.448960   13224 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 10:44:30.448960   13224 command_runner.go:130] > [Service]
	I0401 10:44:30.448960   13224 command_runner.go:130] > Type=notify
	I0401 10:44:30.449175   13224 command_runner.go:130] > Restart=on-failure
	I0401 10:44:30.449254   13224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 10:44:30.449254   13224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 10:44:30.449324   13224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 10:44:30.449324   13224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 10:44:30.449324   13224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 10:44:30.449324   13224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 10:44:30.449324   13224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 10:44:30.449424   13224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 10:44:30.449463   13224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 10:44:30.449493   13224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNOFILE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNPROC=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitCORE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0401 10:44:30.449493   13224 command_runner.go:130] > TasksMax=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > TimeoutStartSec=0
	I0401 10:44:30.449493   13224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 10:44:30.449493   13224 command_runner.go:130] > Delegate=yes
	I0401 10:44:30.449493   13224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 10:44:30.449493   13224 command_runner.go:130] > KillMode=process
	I0401 10:44:30.449493   13224 command_runner.go:130] > [Install]
	I0401 10:44:30.449493   13224 command_runner.go:130] > WantedBy=multi-user.target
	I0401 10:44:30.462236   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.498715   13224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 10:44:30.555736   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.592141   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:44:30.614828   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:30.648516   13224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0401 10:44:30.661442   13224 ssh_runner.go:195] Run: which cri-dockerd
	I0401 10:44:30.667057   13224 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 10:44:30.680368   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 10:44:30.698489   13224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 10:44:30.744122   13224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 10:44:31.017163   13224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 10:44:31.281375   13224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 10:44:31.281375   13224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 10:44:31.335768   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:31.610524   13224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 10:45:43.060669   13224 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0401 10:45:43.060669   13224 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0401 10:45:43.063059   13224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4520277s)
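Editor's note: this is the first hard failure in the SoftStart flow: the Docker restart returned a non-zero control-process exit after 1m11s, which is consistent with systemd cycling through Restart=on-failure attempts under the StartLimitBurst=3 / StartLimitIntervalSec=60 budget declared in the unit above before giving up. The harness reacts by dumping the unit journal; the equivalent diagnosis step, run locally for illustration (the real run executes it over the SSH session shown above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the ssh_runner step below: pull the docker unit's journal
        // so the failing dockerd invocation is captured in the test report.
        out, err := exec.Command("journalctl", "--no-pager", "-u", "docker").CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("journalctl failed:", err)
        }
    }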
	I0401 10:45:43.077122   13224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.111111   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.111139   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.111214   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112397   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112470   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112542   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112898   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.115799   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.116447   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.118558   13224 command_runner.go:130] > Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	I0401 10:45:43.118619   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.118656   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.118786   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.118818   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.118867   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.118899   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119018   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119050   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122002   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
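	The failure captured above is dockerd[5139] timing out while dialing /run/containerd/containerd.sock during the restart the test issued, after which systemd marks docker.service failed. A minimal way to inspect this state by hand, assuming the profile name from this run (functional-706500), is:

	    minikube ssh -p functional-706500
	    ls -l /run/containerd/containerd.sock      # socket dockerd failed to dial
	    sudo systemctl status docker --no-pager    # unit state after the failed restart
	    sudo journalctl --no-pager -u docker       # same journal the test dumps below
	    sudo systemctl restart docker              # the command the test ran via sudo

	This is a diagnostic sketch, not part of the recorded test run; the journal it would print is the one reproduced in the error output that follows.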
	I0401 10:45:43.152344   13224 out.go:177] 
	W0401 10:45:43.155037   13224 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 10:45:43.156924   13224 out.go:239] * 
	W0401 10:45:43.158470   13224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 10:45:43.165740   13224 out.go:177] 
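	Diagnostic sketch (not part of the captured output; commands assume the functional-706500 profile named in these logs): the start above fails because dockerd cannot dial /run/containerd/containerd.sock within its 60-second deadline (10:44:43 to 10:45:43), so the first things to check from the host would be the socket and the unit state:
	
		# Does the containerd socket dockerd is dialing actually exist on the node?
		minikube ssh -p functional-706500 -- "sudo ls -l /run/containerd/containerd.sock"
		# What does the docker unit itself report?
		minikube ssh -p functional-706500 -- "sudo systemctl status docker --no-pager"
		minikube ssh -p functional-706500 -- "sudo journalctl -u docker --no-pager -n 50"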
	
	
	==> Docker <==
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 01 10:45:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:45:43 functional-706500 dockerd[5287]: time="2024-04-01T10:45:43.246929152Z" level=info msg="Starting up"
	Apr 01 10:46:43 functional-706500 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:46:43 functional-706500 cri-dockerd[1235]: time="2024-04-01T10:46:43Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:46:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 01 10:46:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:46:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:46:43 functional-706500 dockerd[5501]: time="2024-04-01T10:46:43.470357918Z" level=info msg="Starting up"
	Apr 01 10:47:43 functional-706500 dockerd[5501]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:47:43 functional-706500 cri-dockerd[1235]: time="2024-04-01T10:47:43Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 10:47:43 functional-706500 cri-dockerd[1235]: time="2024-04-01T10:47:43Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:47:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 01 10:47:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:47:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
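	Hedged sketch (not part of the journal above): the restart counter climbing from 1 to 3 with the same dial timeout suggests containerd never comes up, so on the node one would confirm the loop and probe containerd directly rather than waiting out each 60-second attempt:
	
		# How many times has systemd restarted docker.service so far?
		sudo systemctl show docker --property=NRestarts
		# Probe containerd on the socket dockerd keeps timing out against;
		# this fails if nothing is serving the socket.
		sudo ctr --address /run/containerd/containerd.sock version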
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-01T10:47:45Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
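	Illustrative follow-up (assumed commands, not from the captured run): the crictl failure above is just the CRI view of the dead Docker daemon, so pointing crictl explicitly at the cri-dockerd endpoint and checking its units gives the same answer more directly:
	
		# Query the CRI endpoint the error message names.
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
		# cri-dockerd's socket/service units (names as packaged by cri-dockerd).
		sudo systemctl status cri-docker.socket cri-docker.service --no-pager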
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
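	Hedged sketch (not captured here): with the container runtime down, the kube-apiserver pod cannot run, so nothing listens on 8441; the refusal above can be confirmed independently of kubectl:
	
		# Is anything listening on the apiserver port?
		sudo ss -tlnp | grep 8441
		# A direct health probe; connection refused is expected while docker is down.
		curl -k https://localhost:8441/healthz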
	
	
	==> dmesg <==
	[Apr 1 10:42] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.115403] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.585531] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.211496] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.243279] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.862068] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.224563] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.212159] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.303367] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.779527] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.125264] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.334425] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.690157] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +7.791368] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.115789] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.847693] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.168038] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.042400] systemd-fstab-generator[3395]: Ignoring "noauto" option for root device
	[  +0.238049] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 10:43] kauditd_printk_skb: 65 callbacks suppressed
	[Apr 1 10:44] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.654601] systemd-fstab-generator[4704]: Ignoring "noauto" option for root device
	[  +0.265712] systemd-fstab-generator[4719]: Ignoring "noauto" option for root device
	[  +0.315556] systemd-fstab-generator[4734]: Ignoring "noauto" option for root device
	[  +5.367808] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 10:48:43 up 8 min,  0 users,  load average: 0.04, 0.22, 0.16
	Linux functional-706500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 10:48:37 functional-706500 kubelet[2870]: E0401 10:48:37.683755    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:48:37 functional-706500 kubelet[2870]: E0401 10:48:37.685309    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:48:37 functional-706500 kubelet[2870]: E0401 10:48:37.686267    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:48:37 functional-706500 kubelet[2870]: E0401 10:48:37.686419    2870 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 01 10:48:38 functional-706500 kubelet[2870]: E0401 10:48:38.548614    2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused" interval="7s"
	Apr 01 10:48:39 functional-706500 kubelet[2870]: E0401 10:48:39.241851    2870 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.093977739s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.749409    2870 kubelet.go:2902] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750059    2870 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750294    2870 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750327    2870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750157    2870 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750371    2870 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750416    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750468    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750617    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750652    2870 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750861    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.750891    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: I0401 10:48:43.750904    2870 image_gc_manager.go:207] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.751463    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.751909    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: I0401 10:48:43.751982    2870 image_gc_manager.go:215] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.754608    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.754639    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 01 10:48:43 functional-706500 kubelet[2870]: E0401 10:48:43.755585    2870 kubelet.go:1433] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:45:56.306593    7080 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 10:46:43.298478    7080 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:46:43.332406    7080 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:46:43.362071    7080 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:46:43.396075    7080 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:47:43.515419    7080 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:47:43.548059    7080 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:47:43.580995    7080 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:47:43.616351    7080 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
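The "%!F(MISSING)" fragments in the kubelet lines above are not corruption in this report: they are Go's fmt package flagging stray verbs. The URL-escaped socket path (/var/run/docker.sock becomes %2Fvar%2Frun%2Fdocker.sock) ends up being used as a printf-style format string with no arguments, so every "%2F" is read as a %F verb. A standalone sketch (illustrative only, not harness code) reproduces the exact rendering:

	package main

	import "fmt"

	func main() {
		// The already-escaped URL is (mis)used as the format string with no
		// arguments, so fmt reports each stray %2F verb as %!F(MISSING).
		escaped := "http://%2Fvar%2Frun%2Fdocker.sock/v1.42/images/json"
		fmt.Println(fmt.Sprintf(escaped))
		// Output: http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json
	}

The substantive failure is stated in plain text either way: dockerd on the VM stopped answering on /var/run/docker.sock, so the kubelet's ListContainers/ListImages calls and the log collector's docker ps probes all fail identically.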
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500: exit status 2 (12.37663s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:48:44.663687    5504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-706500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (347.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (180.74s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-706500 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-706500 get po -A: exit status 1 (10.4325068s)

                                                
                                                
** stderr ** 
	E0401 10:48:59.261800    6804 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 10:49:01.372395    6804 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 10:49:03.422207    6804 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 10:49:05.462081    6804 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 10:49:07.503807    6804 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-706500 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"E0401 10:48:59.261800    6804 memcache.go:265] couldn't get current server API group list: Get \"https://172.19.145.71:8441/api?timeout=32s\": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.\nE0401 10:49:01.372395    6804 memcache.go:265] couldn't get current server API group list: Get \"https://172.19.145.71:8441/api?timeout=32s\": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.\nE0401 10:49:03.422207    6804 memcache.go:265] couldn't get current server API group list: Get \"https://172.19.145.71:8441/api?timeout=32s\": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.\nE0401 10:49:05.462081    6804 memcache.go:265] couldn't get current server API group list: Get \"https://172.19.145.71:8441/api?timeout=32s\": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.\nE0401 10:49:07.503807    6804 memcache.go:265] couldn't get current server API group list: Get \"https://172.19.145.71:8441/api?timeout=32s\": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.\nUnable to connect to the server: dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.\n"*: args "kubectl --context functional-706500 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-706500 get po -A"
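"connectex: No connection could be made because the target machine actively refused it" is Winsock's wording for a refused TCP handshake: the VM at 172.19.145.71 answers on the network, but nothing is listening on apiserver port 8441. A hypothetical triage snippet (not part of the test suite) that checks exactly that distinction:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused connection returns quickly with an error; a routing or
		// firewall problem would instead hang until the timeout fires.
		conn, err := net.DialTimeout("tcp", "172.19.145.71:8441", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}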
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500: exit status 2 (12.2790644s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:49:07.628781    4492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 logs -n 25
E0401 10:49:46.628220    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 logs -n 25: (2m24.9592488s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| ip      | addons-852800 ip                                                      | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	| addons  | addons-852800 addons disable                                          | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |                |                     |                     |
	|         | -v=1                                                                  |                   |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                          | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                          | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |                |                     |                     |
	|         | -v=1                                                                  |                   |                   |                |                     |                     |
	| stop    | -p addons-852800                                                      | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:32 UTC |
	| addons  | enable dashboard -p                                                   | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                         |                   |                   |                |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                         |                   |                   |                |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                         |                   |                   |                |                     |                     |
	| delete  | -p addons-852800                                                      | addons-852800     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:33 UTC |
	| start   | -p nospam-189500 -n=1 --memory=2250 --wait=false                      | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:33 UTC | 01 Apr 24 10:36 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | start --dry-run                                                       |                   |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | start --dry-run                                                       |                   |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | start --dry-run                                                       |                   |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | pause                                                                 |                   |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | pause                                                                 |                   |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | pause                                                                 |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | unpause                                                               |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | unpause                                                               |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | unpause                                                               |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | stop                                                                  |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | stop                                                                  |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                               | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500           |                   |                   |                |                     |                     |
	|         | stop                                                                  |                   |                   |                |                     |                     |
	| delete  | -p nospam-189500                                                      | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	| start   | -p functional-706500                                                  | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
	|         | --memory=4000                                                         |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                                  | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |                |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:43:10
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
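	The entries that follow use exactly the header documented on the line above: severity letter, mmdd date, wall-clock time, thread/process id, then the file:line of the emitting call. A small sketch of splitting one such line into its fields, assuming only that layout (the regular expression is illustrative):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var header = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I0401 10:43:10.072128   13224 out.go:291] Setting OutFile to fd 716 ..."
		if m := header.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}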
	I0401 10:43:10.072128   13224 out.go:291] Setting OutFile to fd 716 ...
	I0401 10:43:10.073323   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.073384   13224 out.go:304] Setting ErrFile to fd 712...
	I0401 10:43:10.073384   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.097024   13224 out.go:298] Setting JSON to false
	I0401 10:43:10.100726   13224 start.go:129] hostinfo: {"hostname":"minikube6","uptime":310948,"bootTime":1711657241,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:43:10.100838   13224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:43:10.105172   13224 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:43:10.107554   13224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:43:10.107554   13224 notify.go:220] Checking for updates...
	I0401 10:43:10.112799   13224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:43:10.115283   13224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:43:10.117610   13224 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:43:10.121040   13224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:43:10.124505   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:10.124505   13224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:43:15.710035   13224 out.go:177] * Using the hyperv driver based on existing profile
	I0401 10:43:15.713423   13224 start.go:297] selected driver: hyperv
	I0401 10:43:15.713545   13224 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.713861   13224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:43:15.770030   13224 cni.go:84] Creating CNI manager for ""
	I0401 10:43:15.770109   13224 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:43:15.770329   13224 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.770329   13224 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:43:15.777998   13224 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 10:43:15.780145   13224 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:43:15.780237   13224 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:43:15.780237   13224 cache.go:56] Caching tarball of preloaded images
	I0401 10:43:15.780237   13224 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 10:43:15.780237   13224 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 10:43:15.780237   13224 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
	I0401 10:43:15.783414   13224 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 10:43:15.783414   13224 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-706500"
	I0401 10:43:15.784054   13224 start.go:96] Skipping create...Using existing machine configuration
	I0401 10:43:15.784054   13224 fix.go:54] fixHost starting: 
	I0401 10:43:15.785053   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:18.643052   13224 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 10:43:18.643052   13224 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 10:43:18.646761   13224 out.go:177] * Updating the running hyperv "functional-706500" VM ...
	I0401 10:43:18.648735   13224 machine.go:94] provisionDockerMachine start ...
	I0401 10:43:18.648735   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:20.929729   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:23.605499   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:23.605829   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:23.611863   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:23.612379   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:23.612379   13224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 10:43:23.742743   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
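	Each "[executing ==>]" / "[stdout =====>]" pair in these logs is libmachine shelling out to PowerShell and scraping stdout, and the same round trip (VM state, then adapter IP, then an SSH command) repeats for every provisioning step. A minimal sketch of that pattern, with a hypothetical vmState helper standing in for the real driver code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmState mirrors the command visible in the log: ask Hyper-V for the
	// VM's state via PowerShell and return the trimmed stdout ("Running").
	func vmState(name string) (string, error) {
		cmd := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
		out, err := cmd.Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		state, err := vmState("functional-706500")
		fmt.Println(state, err)
	}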
	
	I0401 10:43:23.742861   13224 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 10:43:23.742998   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:25.983575   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:28.656268   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:28.656268   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:28.656268   13224 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 10:43:28.820899   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:28.821051   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:31.040178   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:33.703614   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:33.704376   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:33.709326   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:33.710241   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:33.710241   13224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 10:43:33.843134   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:43:33.843192   13224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 10:43:33.843284   13224 buildroot.go:174] setting up certificates
	I0401 10:43:33.843348   13224 provision.go:84] configureAuth start
	I0401 10:43:33.843416   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:36.090770   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:38.722800   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:40.943290   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:40.943497   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:40.943706   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:43.583483   13224 provision.go:143] copyHostCerts
	I0401 10:43:43.584428   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 10:43:43.584791   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 10:43:43.584791   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 10:43:43.585153   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 10:43:43.586561   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 10:43:43.586884   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 10:43:43.586884   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 10:43:43.587236   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 10:43:43.588288   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 10:43:43.588425   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 10:43:43.589822   13224 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
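	The server certificate is minted with SANs covering every name the docker endpoint may be reached by: 127.0.0.1, the VM IP, the machine name, localhost, and minikube. A self-contained crypto/x509 sketch of a certificate with that SAN set (self-signed here for brevity; the harness signs with the CA named in the ca-key argument above):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-706500"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.145.71")},
			DNSNames:     []string{"functional-706500", "localhost", "minikube"},
		}
		// The template doubles as its own parent, making the cert self-signed.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err)
	}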
	I0401 10:43:43.806505   13224 provision.go:177] copyRemoteCerts
	I0401 10:43:43.818752   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 10:43:43.819653   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:46.035972   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:48.717231   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:43:48.824115   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0033951s)
	I0401 10:43:48.824185   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 10:43:48.824251   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 10:43:48.875490   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 10:43:48.875659   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 10:43:48.928156   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 10:43:48.928156   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 10:43:48.979923   13224 provision.go:87] duration metric: took 15.1364681s to configureAuth
	I0401 10:43:48.980191   13224 buildroot.go:189] setting minikube options for container-runtime
	I0401 10:43:48.980337   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:48.980929   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:51.164916   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:53.799230   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:53.799230   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:53.799230   13224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 10:43:53.939680   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 10:43:53.939680   13224 buildroot.go:70] root file system type: tmpfs
	I0401 10:43:53.939937   13224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 10:43:53.940027   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:58.817419   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:58.817500   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:58.817500   13224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 10:43:58.984176   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 10:43:58.984176   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:01.191660   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:03.881634   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:03.881634   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:03.881634   13224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 10:44:04.035399   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
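	The empty output from the diff-or-replace command means /lib/systemd/system/docker.service already matched the freshly rendered unit, so the mv/daemon-reload/restart branch never ran and dockerd was left untouched. The same write-only-on-change idiom, sketched in Go with illustrative paths:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged replaces path only when content differs, so callers
	// can skip the expensive service restart in the unchanged case.
	func writeIfChanged(path string, content []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil
		}
		return true, os.WriteFile(path, content, 0o644)
	}

	func main() {
		changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		fmt.Println(changed, err)
	}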
	I0401 10:44:04.035399   13224 machine.go:97] duration metric: took 45.3863416s to provisionDockerMachine
	I0401 10:44:04.035734   13224 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 10:44:04.035734   13224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 10:44:04.052038   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 10:44:04.052038   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:06.304118   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:09.045318   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:09.151365   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0992303s)
	I0401 10:44:09.165345   13224 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 10:44:09.173410   13224 command_runner.go:130] > NAME=Buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 10:44:09.173410   13224 command_runner.go:130] > ID=buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 10:44:09.173410   13224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 10:44:09.173410   13224 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 10:44:09.173410   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 10:44:09.174038   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 10:44:09.175293   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 10:44:09.175293   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 10:44:09.176304   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 10:44:09.176304   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> /etc/test/nested/copy/1260/hosts
	I0401 10:44:09.189471   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 10:44:09.209890   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 10:44:09.266189   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 10:44:09.335015   13224 start.go:296] duration metric: took 5.2992427s for postStartSetup
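
Editor's note: the NewFileAsset pairs above show filesync's mapping rule: whatever sits under the local .minikube\files tree is mirrored at the same path under / on the guest (files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem). A sketch of that path translation, with remotePath as a hypothetical helper:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	// remotePath maps a local asset under filesRoot to its guest path
	// by stripping the root and flipping Windows separators.
	func remotePath(filesRoot, local string) (string, error) {
		rel, err := filepath.Rel(filesRoot, local)
		if err != nil {
			return "", err
		}
		return "/" + strings.ReplaceAll(rel, `\`, "/"), nil
	}

	func main() {
		root := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\files`
		filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			dst, _ := remotePath(root, p)
			fmt.Println(p, "->", dst) // each pair is then scp'd to the guest
			return nil
		})
	}
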
	I0401 10:44:09.335234   13224 fix.go:56] duration metric: took 53.550799s for fixHost
	I0401 10:44:09.335234   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:11.525913   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:14.194016   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:14.194016   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:14.194565   13224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 10:44:14.343852   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711968254.338870760
	
	I0401 10:44:14.343852   13224 fix.go:216] guest clock: 1711968254.338870760
	I0401 10:44:14.343852   13224 fix.go:229] Guest: 2024-04-01 10:44:14.33887076 +0000 UTC Remote: 2024-04-01 10:44:09.335234 +0000 UTC m=+59.451830901 (delta=5.00363676s)
	I0401 10:44:14.344039   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:16.578130   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:19.271412   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:19.271412   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:19.271412   13224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711968254
	I0401 10:44:19.442247   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 10:44:14 UTC 2024
	
	I0401 10:44:19.442303   13224 fix.go:236] clock set: Mon Apr  1 10:44:14 UTC 2024
	 (err=<nil>)
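
Editor's note: fix.go's clock handling above is a simple drift check: read the guest clock with date +%s.%N, diff it against the host-side reference, and push a fresh epoch with sudo date -s @<seconds> when the delta (here ~5s) is too large. A Go sketch of that decision; the 2-second tolerance and which epoch gets pushed are assumptions, not taken from the log:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// syncCmd parses the guest's `date +%s.%N` output and, if the guest
	// drifts from hostNow by more than tol, returns the command that
	// would reset it -- the `sudo date -s @1711968254` seen above.
	func syncCmd(guestOut string, hostNow time.Time, tol time.Duration) (string, bool) {
		secs, _ := strconv.ParseInt(
			strings.SplitN(strings.TrimSpace(guestOut), ".", 2)[0], 10, 64)
		delta := time.Unix(secs, 0).Sub(hostNow)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tol {
			return "", false
		}
		// Assumption: reset to the host epoch; the real code may pick
		// a different reference.
		return fmt.Sprintf("sudo date -s @%d", hostNow.Unix()), true
	}

	func main() {
		host := time.Unix(1711968249, 0) // "Remote" timestamp from the log
		cmd, need := syncCmd("1711968254.338870760", host, 2*time.Second)
		fmt.Println(cmd, need) // delta ~5s exceeds 2s, so a reset is issued
	}
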
	I0401 10:44:19.442303   13224 start.go:83] releasing machines lock for "functional-706500", held for 1m3.6584368s
	I0401 10:44:19.442621   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:21.638421   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:24.247802   13224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 10:44:24.247971   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:24.260879   13224 ssh_runner.go:195] Run: cat /version.json
	I0401 10:44:24.260879   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:29.395439   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.396663   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.396737   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.417726   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.538631   13224 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2907196s)
	I0401 10:44:29.538631   13224 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: cat /version.json: (5.2777142s)
	I0401 10:44:29.550754   13224 ssh_runner.go:195] Run: systemctl --version
	I0401 10:44:29.560248   13224 command_runner.go:130] > systemd 252 (252)
	I0401 10:44:29.560248   13224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 10:44:29.575260   13224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 10:44:29.584117   13224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 10:44:29.584848   13224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 10:44:29.596050   13224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 10:44:29.618740   13224 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 10:44:29.618740   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:29.619282   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:29.653783   13224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 10:44:29.667491   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 10:44:29.699929   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 10:44:29.719747   13224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 10:44:29.731685   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 10:44:29.769559   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.808138   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 10:44:29.839942   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.872793   13224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 10:44:29.905536   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 10:44:29.943322   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 10:44:29.976065   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 10:44:30.009202   13224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 10:44:30.027840   13224 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 10:44:30.041084   13224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 10:44:30.074413   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:30.372540   13224 ssh_runner.go:195] Run: sudo systemctl restart containerd
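
Editor's note: the run of sed calls above rewrites /etc/containerd/config.toml in place: pin sandbox_image to pause:3.9, force SystemdCgroup = false (the "cgroupfs" driver), migrate the v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same edits expressed as Go regexp replacements over the file's text, as a sketch:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewrites mirrors the sed pipeline above: each pattern/replacement
	// pair is applied to the whole config.toml contents.
	var rewrites = []struct{ pat, rep string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
		{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}

	func rewriteConfig(toml string) string {
		for _, r := range rewrites {
			toml = regexp.MustCompile(r.pat).ReplaceAllString(toml, r.rep)
		}
		return toml
	}

	func main() {
		fmt.Print(rewriteConfig("    SystemdCgroup = true\n"))
		// prints: "    SystemdCgroup = false"
	}
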
	I0401 10:44:30.410347   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:30.423188   13224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 10:44:30.448708   13224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 10:44:30.448805   13224 command_runner.go:130] > [Unit]
	I0401 10:44:30.448846   13224 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 10:44:30.448846   13224 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 10:44:30.448846   13224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 10:44:30.448846   13224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 10:44:30.448846   13224 command_runner.go:130] > StartLimitBurst=3
	I0401 10:44:30.448960   13224 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 10:44:30.448960   13224 command_runner.go:130] > [Service]
	I0401 10:44:30.448960   13224 command_runner.go:130] > Type=notify
	I0401 10:44:30.449175   13224 command_runner.go:130] > Restart=on-failure
	I0401 10:44:30.449254   13224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 10:44:30.449254   13224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 10:44:30.449324   13224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 10:44:30.449324   13224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 10:44:30.449324   13224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 10:44:30.449324   13224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 10:44:30.449324   13224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 10:44:30.449424   13224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 10:44:30.449463   13224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 10:44:30.449493   13224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNOFILE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNPROC=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitCORE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0401 10:44:30.449493   13224 command_runner.go:130] > TasksMax=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > TimeoutStartSec=0
	I0401 10:44:30.449493   13224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 10:44:30.449493   13224 command_runner.go:130] > Delegate=yes
	I0401 10:44:30.449493   13224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 10:44:30.449493   13224 command_runner.go:130] > KillMode=process
	I0401 10:44:30.449493   13224 command_runner.go:130] > [Install]
	I0401 10:44:30.449493   13224 command_runner.go:130] > WantedBy=multi-user.target
	I0401 10:44:30.462236   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.498715   13224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 10:44:30.555736   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.592141   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:44:30.614828   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:30.648516   13224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0401 10:44:30.661442   13224 ssh_runner.go:195] Run: which cri-dockerd
	I0401 10:44:30.667057   13224 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 10:44:30.680368   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 10:44:30.698489   13224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 10:44:30.744122   13224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 10:44:31.017163   13224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 10:44:31.281375   13224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 10:44:31.281375   13224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
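
Editor's note: the 130-byte /etc/docker/daemon.json pushed here is what flips Docker itself to the cgroupfs driver; the log reports only its size, not its contents. A plausible minimal shape, marshaled in Go and labeled as an assumption:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// daemonConfig is an ASSUMED minimal shape of the daemon.json that
	// "configuring docker to use cgroupfs" writes; the actual file may
	// carry additional keys.
	type daemonConfig struct {
		ExecOpts []string `json:"exec-opts"`
	}

	func main() {
		b, _ := json.MarshalIndent(daemonConfig{
			ExecOpts: []string{"native.cgroupdriver=cgroupfs"},
		}, "", "  ")
		fmt.Println(string(b))
	}
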
	I0401 10:44:31.335768   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:31.610524   13224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 10:45:43.060669   13224 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0401 10:45:43.060669   13224 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0401 10:45:43.063059   13224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4520277s)
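
Editor's note: this failed restart is the pivot of the test failure -- the run spends 1m11s inside systemctl restart docker before it exits non-zero, then falls back to journalctl so the unit's log survives in the report. A sketch of that restart-then-diagnose pattern; restartWithDiagnostics is a made-up helper, and it runs locally via os/exec where minikube runs the same commands over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// restartWithDiagnostics tries the restart and, on failure, pulls
	// the unit's journal so the error is captured alongside it.
	func restartWithDiagnostics(unit string) error {
		out, err := exec.Command("sudo", "systemctl", "restart", unit).CombinedOutput()
		if err == nil {
			return nil
		}
		journal, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", unit).Output()
		return fmt.Errorf("restart %s: %v\n%s\n%s", unit, err, out, journal)
	}

	func main() {
		if err := restartWithDiagnostics("docker"); err != nil {
			fmt.Println(err)
		}
	}
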
	I0401 10:45:43.077122   13224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.111111   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.111139   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.111214   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112397   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112470   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112542   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112898   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.115799   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.116447   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.118558   13224 command_runner.go:130] > Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	I0401 10:45:43.118619   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.118656   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.118786   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.118818   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.118867   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.118899   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119018   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119050   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122002   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	I0401 10:45:43.152344   13224 out.go:177] 
	W0401 10:45:43.155037   13224 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 10:45:43.156924   13224 out.go:239] * 
	W0401 10:45:43.158470   13224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 10:45:43.165740   13224 out.go:177] 
	
	
	==> Docker <==
	Apr 01 10:47:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:47:43 functional-706500 dockerd[5669]: time="2024-04-01T10:47:43.721563433Z" level=info msg="Starting up"
	Apr 01 10:48:43 functional-706500 dockerd[5669]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:48:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:48:43 functional-706500 cri-dockerd[1235]: time="2024-04-01T10:48:43Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 01 10:48:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:48:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:48:43 functional-706500 dockerd[5837]: time="2024-04-01T10:48:43.970136068Z" level=info msg="Starting up"
	Apr 01 10:49:43 functional-706500 dockerd[5837]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:49:43 functional-706500 cri-dockerd[1235]: time="2024-04-01T10:49:43Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 10:49:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:49:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:49:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:49:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 01 10:49:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:49:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:49:44 functional-706500 dockerd[6096]: time="2024-04-01T10:49:44.210275023Z" level=info msg="Starting up"
	Apr 01 10:50:44 functional-706500 dockerd[6096]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:50:44 functional-706500 cri-dockerd[1235]: time="2024-04-01T10:50:44Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:50:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-01T10:50:46Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 10:42] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.115403] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.585531] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.211496] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.243279] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.862068] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.224563] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.212159] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.303367] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.779527] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.125264] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.334425] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.690157] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +7.791368] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.115789] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.847693] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.168038] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.042400] systemd-fstab-generator[3395]: Ignoring "noauto" option for root device
	[  +0.238049] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 10:43] kauditd_printk_skb: 65 callbacks suppressed
	[Apr 1 10:44] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.654601] systemd-fstab-generator[4704]: Ignoring "noauto" option for root device
	[  +0.265712] systemd-fstab-generator[4719]: Ignoring "noauto" option for root device
	[  +0.315556] systemd-fstab-generator[4734]: Ignoring "noauto" option for root device
	[  +5.367808] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 10:51:44 up 11 min,  0 users,  load average: 0.00, 0.11, 0.12
	Linux functional-706500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 10:51:40 functional-706500 kubelet[2870]: E0401 10:51:40.564282    2870 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.19.145.71:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-706500.17c22213fb2135a9  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-706500,UID:9df4090af0216fb714c930802ab28762,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:functional-706500,},FirstTimestamp:2024-04-01 10:44:37.567190441 +0000 UTC m=+110.981142754,LastTimestamp:2024-04-01 10:44:37.567190441 +0000 UTC m=+110.981142754,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-706500,}"
	Apr 01 10:51:40 functional-706500 kubelet[2870]: E0401 10:51:40.608392    2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused" interval="7s"
	Apr 01 10:51:42 functional-706500 kubelet[2870]: E0401 10:51:42.450088    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?resourceVersion=0&timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:51:42 functional-706500 kubelet[2870]: E0401 10:51:42.451196    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:51:42 functional-706500 kubelet[2870]: E0401 10:51:42.452419    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:51:42 functional-706500 kubelet[2870]: E0401 10:51:42.453594    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:51:42 functional-706500 kubelet[2870]: E0401 10:51:42.454875    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 10:51:42 functional-706500 kubelet[2870]: E0401 10:51:42.454978    2870 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.273406    2870 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m13.125403962s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473055    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473098    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: I0401 10:51:44.473112    2870 image_gc_manager.go:215] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473191    2870 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473213    2870 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473237    2870 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473264    2870 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473286    2870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473367    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473389    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473574    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473601    2870 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.473652    2870 kubelet.go:2902] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.474851    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.474879    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 01 10:51:44 functional-706500 kubelet[2870]: E0401 10:51:44.475089    2870 kubelet.go:1433] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0401 10:49:19.905000    8832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 10:49:44.015170    8832 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:49:44.048988    8832 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:49:44.084513    8832 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:49:44.114954    8832 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:49:44.145263    8832 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:50:44.252525    8832 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:50:44.284674    8832 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 10:50:44.315018    8832 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500: exit status 2 (12.6411321s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0401 10:51:45.313154   11652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-706500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (180.74s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl images: exit status 1 (11.5670771s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0401 10:58:47.560088   10524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0401 10:58:47.560088   10524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.57s)

TestFunctional/serial/CacheCmd/cache/cache_reload (179.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-706500 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (47.5289946s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	W0401 10:58:59.119965    3260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-706500 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.6325687s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0401 10:59:46.654815    3700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 cache reload: (1m48.9149116s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.5865769s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0401 11:01:47.197668   12524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1161: expected "out/minikube-windows-amd64.exe -p functional-706500 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.67s)

TestFunctional/serial/MinikubeKubectlCmd (180.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 kubectl -- --context functional-706500 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-706500 kubectl -- --context functional-706500 get pods: exit status 1 (10.7447711s)

** stderr ** 
	W0401 11:05:01.204457    8556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 11:05:03.540536    2072 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:05:05.648865    2072 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:05:07.693564    2072 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:05:09.739051    2072 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:05:11.793142    2072 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-706500 kubectl -- --context functional-706500 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500: exit status 2 (12.453749s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0401 11:05:11.943082   13156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 logs -n 25
E0401 11:06:26.645842    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 logs -n 25: (2m24.4818622s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| pause   | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | pause                                                       |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| delete  | -p nospam-189500                                            | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	| start   | -p functional-706500                                        | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
	|         | --memory=4000                                               |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                        | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:51 UTC | 01 Apr 24 10:53 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:53 UTC | 01 Apr 24 10:55 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:55 UTC | 01 Apr 24 10:57 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:57 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                 |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache delete                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                 |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	| ssh     | functional-706500 ssh sudo                                  | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | crictl images                                               |                   |                   |                |                     |                     |
	| ssh     | functional-706500                                           | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| ssh     | functional-706500 ssh                                       | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache reload                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC | 01 Apr 24 11:01 UTC |
	| ssh     | functional-706500 ssh                                       | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| kubectl | functional-706500 kubectl --                                | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:05 UTC |                     |
	|         | --context functional-706500                                 |                   |                   |                |                     |                     |
	|         | get pods                                                    |                   |                   |                |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
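For reference, the second start against functional-706500 in the table above (10:43 UTC) is the only row without a completion timestamp. The same sequence can be replayed by hand with the commands below, lifted straight from the table (a sketch; assumes minikube is on PATH):

    # Initial start that completed (10:39-10:43 UTC row in the table):
    minikube start -p functional-706500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
    # The follow-up start that never finished, with verbose stderr logging:
    minikube start -p functional-706500 --alsologtostderr -v=8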
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:43:10
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:43:10.072128   13224 out.go:291] Setting OutFile to fd 716 ...
	I0401 10:43:10.073323   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.073384   13224 out.go:304] Setting ErrFile to fd 712...
	I0401 10:43:10.073384   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.097024   13224 out.go:298] Setting JSON to false
	I0401 10:43:10.100726   13224 start.go:129] hostinfo: {"hostname":"minikube6","uptime":310948,"bootTime":1711657241,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:43:10.100838   13224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:43:10.105172   13224 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:43:10.107554   13224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:43:10.107554   13224 notify.go:220] Checking for updates...
	I0401 10:43:10.112799   13224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:43:10.115283   13224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:43:10.117610   13224 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:43:10.121040   13224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:43:10.124505   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:10.124505   13224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:43:15.710035   13224 out.go:177] * Using the hyperv driver based on existing profile
	I0401 10:43:15.713423   13224 start.go:297] selected driver: hyperv
	I0401 10:43:15.713545   13224 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.713861   13224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:43:15.770030   13224 cni.go:84] Creating CNI manager for ""
	I0401 10:43:15.770109   13224 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:43:15.770329   13224 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.770329   13224 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:43:15.777998   13224 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 10:43:15.780145   13224 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:43:15.780237   13224 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:43:15.780237   13224 cache.go:56] Caching tarball of preloaded images
	I0401 10:43:15.780237   13224 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 10:43:15.780237   13224 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 10:43:15.780237   13224 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
	I0401 10:43:15.783414   13224 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 10:43:15.783414   13224 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-706500"
	I0401 10:43:15.784054   13224 start.go:96] Skipping create...Using existing machine configuration
	I0401 10:43:15.784054   13224 fix.go:54] fixHost starting: 
	I0401 10:43:15.785053   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:18.643052   13224 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 10:43:18.643052   13224 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 10:43:18.646761   13224 out.go:177] * Updating the running hyperv "functional-706500" VM ...
	I0401 10:43:18.648735   13224 machine.go:94] provisionDockerMachine start ...
	I0401 10:43:18.648735   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:20.929729   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:23.605499   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:23.605829   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:23.611863   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:23.612379   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:23.612379   13224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 10:43:23.742743   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:23.742861   13224 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 10:43:23.742998   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:25.983575   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:28.656268   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:28.656268   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:28.656268   13224 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 10:43:28.820899   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:28.821051   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:31.040178   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:33.703614   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:33.704376   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:33.709326   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:33.710241   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:33.710241   13224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 10:43:33.843134   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
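The SSH command just above is an idempotent /etc/hosts rewrite: nothing is touched if some line already ends in the hostname; otherwise an existing 127.0.1.1 entry is replaced in place, and only if none exists is a new one appended. Parameterized, the same logic looks roughly like this (a sketch, not the exact provisioner source):

    # Idempotently map 127.0.1.1 to $HOST in /etc/hosts (sketch).
    HOST=functional-706500
    if ! grep -xq ".*\s$HOST" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOST/g" /etc/hosts   # replace the stale entry
      else
        echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts               # append a fresh entry
      fi
    fi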
	I0401 10:43:33.843192   13224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 10:43:33.843284   13224 buildroot.go:174] setting up certificates
	I0401 10:43:33.843348   13224 provision.go:84] configureAuth start
	I0401 10:43:33.843416   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:36.090770   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:38.722800   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:40.943290   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:40.943497   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:40.943706   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:43.583483   13224 provision.go:143] copyHostCerts
	I0401 10:43:43.584428   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 10:43:43.584791   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 10:43:43.584791   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 10:43:43.585153   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 10:43:43.586561   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 10:43:43.586884   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 10:43:43.586884   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 10:43:43.587236   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 10:43:43.588288   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 10:43:43.588425   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 10:43:43.589822   13224 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
	I0401 10:43:43.806505   13224 provision.go:177] copyRemoteCerts
	I0401 10:43:43.818752   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 10:43:43.819653   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:46.035972   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:48.717231   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:43:48.824115   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0033951s)
	I0401 10:43:48.824185   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 10:43:48.824251   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 10:43:48.875490   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 10:43:48.875659   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 10:43:48.928156   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 10:43:48.928156   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 10:43:48.979923   13224 provision.go:87] duration metric: took 15.1364681s to configureAuth
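configureAuth, timed above at ~15 s, copies the CA material into the host-side store and then mints a server certificate whose SANs cover the VM's IP and hostnames (san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube] in the log). minikube generates these in Go; an illustrative openssl equivalent, with an arbitrary validity period, might look like (run under bash, not minikube's code path):

    # Illustrative only: mint a server cert signed by the profile CA with the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.functional-706500"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 1095 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.19.145.71,DNS:functional-706500,DNS:localhost,DNS:minikube')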
	I0401 10:43:48.980191   13224 buildroot.go:189] setting minikube options for container-runtime
	I0401 10:43:48.980337   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:48.980929   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:51.164916   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:53.799230   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:53.799230   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:53.799230   13224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 10:43:53.939680   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 10:43:53.939680   13224 buildroot.go:70] root file system type: tmpfs
	I0401 10:43:53.939937   13224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 10:43:53.940027   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:58.817419   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:58.817500   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:58.817500   13224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 10:43:58.984176   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 10:43:58.984176   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:01.191660   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:03.881634   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:03.881634   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:03.881634   13224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 10:44:04.035399   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:44:04.035399   13224 machine.go:97] duration metric: took 45.3863416s to provisionDockerMachine
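provisionDockerMachine, closed out above after ~45 s, ends with a write-compare-swap of the docker unit: the generated file is staged as docker.service.new, diffed against the live unit, and only on a difference swapped in and the daemon restarted. The one-liner from the log, unfolded for readability (same behavior):

    # Swap in the staged unit only when it differs from the live one, then restart docker.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && \
           sudo systemctl -f enable docker && \
           sudo systemctl -f restart docker; }

Because diff exits non-zero whenever the files differ, the reload/enable/restart branch runs only when the staged unit actually changed; in this run the output was empty and no restart was triggered here.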
	I0401 10:44:04.035734   13224 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 10:44:04.035734   13224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 10:44:04.052038   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 10:44:04.052038   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:06.304118   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:09.045318   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:09.151365   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0992303s)
	I0401 10:44:09.165345   13224 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 10:44:09.173410   13224 command_runner.go:130] > NAME=Buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 10:44:09.173410   13224 command_runner.go:130] > ID=buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 10:44:09.173410   13224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 10:44:09.173410   13224 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 10:44:09.173410   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 10:44:09.174038   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 10:44:09.175293   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 10:44:09.175293   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 10:44:09.176304   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 10:44:09.176304   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> /etc/test/nested/copy/1260/hosts
	I0401 10:44:09.189471   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 10:44:09.209890   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 10:44:09.266189   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 10:44:09.335015   13224 start.go:296] duration metric: took 5.2992427s for postStartSetup
	I0401 10:44:09.335234   13224 fix.go:56] duration metric: took 53.550799s for fixHost
	I0401 10:44:09.335234   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:11.525913   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:14.194016   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:14.194016   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:14.194565   13224 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 10:44:14.343852   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711968254.338870760
	
	I0401 10:44:14.343852   13224 fix.go:216] guest clock: 1711968254.338870760
	I0401 10:44:14.343852   13224 fix.go:229] Guest: 2024-04-01 10:44:14.33887076 +0000 UTC Remote: 2024-04-01 10:44:09.335234 +0000 UTC m=+59.451830901 (delta=5.00363676s)
	I0401 10:44:14.344039   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:16.578130   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:19.271412   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:19.271412   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:19.271412   13224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711968254
	I0401 10:44:19.442247   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 10:44:14 UTC 2024
	
	I0401 10:44:19.442303   13224 fix.go:236] clock set: Mon Apr  1 10:44:14 UTC 2024
	 (err=<nil>)
	I0401 10:44:19.442303   13224 start.go:83] releasing machines lock for "functional-706500", held for 1m3.6584368s
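The clock fixup above works in two SSH round-trips: read the guest's epoch time, compare it against the host-side timestamp (delta=5.00363676s here), and push the host epoch back into the guest with date -s. Standalone, the same check looks like this (a sketch; the docker user and guest IP are taken from the log, the drift threshold is minikube-internal):

    # Measure guest/host clock drift and force-resync the guest (sketch).
    GUEST=172.19.145.71
    guest_epoch=$(ssh docker@"$GUEST" 'date +%s')
    host_epoch=$(date +%s)
    echo "drift: $(( guest_epoch - host_epoch ))s"
    ssh docker@"$GUEST" "sudo date -s @${host_epoch}"   # what the log's 'sudo date -s @1711968254' did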
	I0401 10:44:19.442621   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:21.638421   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:24.247802   13224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 10:44:24.247971   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:24.260879   13224 ssh_runner.go:195] Run: cat /version.json
	I0401 10:44:24.260879   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:29.395439   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.396663   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.396737   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.417726   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.538631   13224 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2907196s)
	I0401 10:44:29.538631   13224 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: cat /version.json: (5.2777142s)
	I0401 10:44:29.550754   13224 ssh_runner.go:195] Run: systemctl --version
	I0401 10:44:29.560248   13224 command_runner.go:130] > systemd 252 (252)
	I0401 10:44:29.560248   13224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 10:44:29.575260   13224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 10:44:29.584117   13224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 10:44:29.584848   13224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 10:44:29.596050   13224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 10:44:29.618740   13224 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 10:44:29.618740   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:29.619282   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:29.653783   13224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 10:44:29.667491   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 10:44:29.699929   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 10:44:29.719747   13224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 10:44:29.731685   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 10:44:29.769559   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.808138   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 10:44:29.839942   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.872793   13224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 10:44:29.905536   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 10:44:29.943322   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 10:44:29.976065   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 10:44:30.009202   13224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 10:44:30.027840   13224 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 10:44:30.041084   13224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 10:44:30.074413   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:30.372540   13224 ssh_runner.go:195] Run: sudo systemctl restart containerd
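The block above normalizes /etc/containerd/config.toml with in-place sed edits before containerd is restarted; the decisive one pins SystemdCgroup = false so the runtime uses cgroupfs, matching the "configuring containerd to use cgroupfs" line. Reduced to its core (condensed from the sed calls in the log):

    # Force runc to cgroupfs and bounce containerd (condensed from the log above).
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd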
	I0401 10:44:30.410347   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:30.423188   13224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 10:44:30.448708   13224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 10:44:30.448805   13224 command_runner.go:130] > [Unit]
	I0401 10:44:30.448846   13224 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 10:44:30.448846   13224 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 10:44:30.448846   13224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 10:44:30.448846   13224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 10:44:30.448846   13224 command_runner.go:130] > StartLimitBurst=3
	I0401 10:44:30.448960   13224 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 10:44:30.448960   13224 command_runner.go:130] > [Service]
	I0401 10:44:30.448960   13224 command_runner.go:130] > Type=notify
	I0401 10:44:30.449175   13224 command_runner.go:130] > Restart=on-failure
	I0401 10:44:30.449254   13224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 10:44:30.449254   13224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 10:44:30.449324   13224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 10:44:30.449324   13224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 10:44:30.449324   13224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 10:44:30.449324   13224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 10:44:30.449324   13224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 10:44:30.449424   13224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 10:44:30.449463   13224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 10:44:30.449493   13224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNOFILE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNPROC=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitCORE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0401 10:44:30.449493   13224 command_runner.go:130] > TasksMax=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > TimeoutStartSec=0
	I0401 10:44:30.449493   13224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 10:44:30.449493   13224 command_runner.go:130] > Delegate=yes
	I0401 10:44:30.449493   13224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 10:44:30.449493   13224 command_runner.go:130] > KillMode=process
	I0401 10:44:30.449493   13224 command_runner.go:130] > [Install]
	I0401 10:44:30.449493   13224 command_runner.go:130] > WantedBy=multi-user.target
	I0401 10:44:30.462236   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.498715   13224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 10:44:30.555736   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.592141   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:44:30.614828   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:30.648516   13224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
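At this point /etc/crictl.yaml has been flipped from containerd's socket to cri-dockerd's, so crictl (and kubeadm's CRI probes) will talk to docker via cri-dockerd. A quick way to confirm the switch on the guest (a sketch):

    # Verify which CRI endpoint crictl is pointed at (sketch).
    cat /etc/crictl.yaml           # expect: runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl info | head -n 5   # errors out if nothing is serving that socket yet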
	I0401 10:44:30.661442   13224 ssh_runner.go:195] Run: which cri-dockerd
	I0401 10:44:30.667057   13224 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 10:44:30.680368   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 10:44:30.698489   13224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 10:44:30.744122   13224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 10:44:31.017163   13224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 10:44:31.281375   13224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 10:44:31.281375   13224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 10:44:31.335768   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:31.610524   13224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 10:45:43.060669   13224 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0401 10:45:43.060669   13224 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0401 10:45:43.063059   13224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4520277s)
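This is the first hard failure of the run: systemctl restart docker spun for 71 s and exited non-zero, and the tooling immediately dumps the docker unit's journal (below). The same triage by hand uses exactly the commands systemd suggests in the error text:

    # On the guest, inspect why docker.service failed to restart:
    systemctl status docker.service
    journalctl -xeu docker.service
    sudo journalctl --no-pager -u docker   # the full-unit dump the log captures next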
	I0401 10:45:43.077122   13224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.111111   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.111139   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.111214   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112397   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112470   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112542   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112898   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.115799   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.116447   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.118558   13224 command_runner.go:130] > Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	I0401 10:45:43.118619   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
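	[editor's note] The docker0 notice above is informational: it points at the daemon's --bip option for pinning the bridge subnet. A minimal sketch of doing that via /etc/docker/daemon.json (the 172.18.0.1/24 value is illustrative, not from this report; tee replaces the file, so merge by hand if one already exists):

	    # pin the docker0 bridge subnet (illustrative value), then restart the daemon
	    sudo tee /etc/docker/daemon.json <<'EOF'
	    {
	      "bip": "172.18.0.1/24"
	    }
	    EOF
	    sudo systemctl restart docker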
	I0401 10:45:43.118656   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.118786   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.118818   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.118867   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.118899   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119018   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119050   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122002   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
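	[editor's note] The "using the force" message above is docker's default stop behavior: 10 seconds after SIGTERM (signal 15) it escalates to SIGKILL. If a container legitimately needs longer to shut down, the grace period is tunable with standard docker CLI flags (values illustrative):

	    # allow 30s for a clean exit on this stop
	    docker stop --time 30 <container>
	    # or set it when the container is created
	    docker run --stop-timeout 30 ...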
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
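	[editor's note] The tail of the captured journal above shows the actual failure: the restarted dockerd (pid 5139) logs "Starting up" at 10:44:43, then times out a minute later dialing /run/containerd/containerd.sock with "context deadline exceeded", so systemd marks docker.service failed and minikube aborts with the RUNTIME_ENABLE error below. A minimal triage sketch from the host, assuming the functional-706500 VM is still up (these are standard minikube/systemd commands, not taken from this log):

	    # open a shell inside the Hyper-V guest for the failing profile
	    out/minikube-windows-amd64.exe -p functional-706500 ssh

	    # inside the guest: is the containerd that dockerd dials actually running?
	    sudo systemctl status containerd --no-pager
	    ls -l /run/containerd/containerd.sock

	    # the two commands the error text below also suggests
	    sudo systemctl status docker.service --no-pager
	    sudo journalctl -xeu docker.service --no-pager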
	I0401 10:45:43.152344   13224 out.go:177] 
	W0401 10:45:43.155037   13224 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
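Note: the burst of "skip loading plugin" messages above is routine. At startup containerd probes every built-in snapshotter (blockfile, btrfs, devmapper, aufs, zfs) and keeps only those the host filesystem supports, which on this guest is overlayfs. A quick way to confirm which plugins survived probing, using the managed-containerd socket printed in the log (a generic check, not part of this report):

    # inside the minikube guest
    sudo ctr --address /var/run/docker/containerd/containerd.sock plugins ls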
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 10:45:43.156924   13224 out.go:239] * 
	W0401 10:45:43.158470   13224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 10:45:43.165740   13224 out.go:177] 
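The fatal line in the excerpt above is the dial failure: on this restart dockerd blocks for the full 60-second handshake window on /run/containerd/containerd.sock (Starting up at 10:44:43, exit at 10:45:43) and then exits, so the unit never comes up. A minimal triage sketch, assuming the guest is still reachable over SSH (only the profile name functional-706500 comes from this report; the checks themselves are generic):

    minikube -p functional-706500 ssh
    # inside the guest:
    sudo systemctl status docker containerd --no-pager            # unit states
    ls -l /run/containerd/containerd.sock                         # does the socket exist?
    sudo ctr --address /run/containerd/containerd.sock version    # does it answer?
    sudo journalctl -u docker --no-pager -n 50                    # the dial errors above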
	
	
	==> Docker <==
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:04:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 01 11:04:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:04:47 functional-706500 cri-dockerd[1235]: W0401 11:04:47.860778    1235 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Apr 01 11:04:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:04:47 functional-706500 dockerd[8762]: time="2024-04-01T11:04:47.974793898Z" level=info msg="Starting up"
	Apr 01 11:05:48 functional-706500 dockerd[8762]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:05:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:05:48 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:05:48Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 01 11:05:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:05:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:05:48 functional-706500 dockerd[9019]: time="2024-04-01T11:05:48.218482190Z" level=info msg="Starting up"
	Apr 01 11:06:48 functional-706500 dockerd[9019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:06:48 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:06:48Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:06:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Apr 01 11:06:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:06:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
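
Note: the loop above is dockerd exiting because it can never reach containerd ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), so systemd keeps rescheduling docker.service (restart counter 20 through 22). A minimal triage sketch from inside the guest, assuming minikube ssh access; the unit name and socket path below are the usual defaults, not taken from this log:

	# Is containerd up, and did it create its socket?
	sudo systemctl status containerd --no-pager
	sudo journalctl -u containerd --no-pager | tail -n 50
	ls -l /run/containerd/containerd.sock

While containerd stays down, docker.service cannot start, which explains every runtime failure later in this report.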
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-01T11:06:50Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
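
Note: the collector tries crictl first and falls back to plain docker ps -a; both fail here because cri-dockerd only proxies to the same dead Docker daemon. To rule out a misconfigured CRI endpoint rather than a dead daemon, crictl can be pointed at cri-dockerd explicitly (a sketch; the socket path is the cri-dockerd default):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a

The DeadlineExceeded from crictl and the docker.sock refusal point at the same root cause: no running Docker daemon.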
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
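
Note: kubectl is refused on localhost:8441 because the static kube-apiserver pod cannot run without a container runtime. A quick reachability probe from the guest (a sketch; curl being present in the Buildroot image is an assumption):

	# Expect "connection refused" while the runtime is down
	curl -sk https://localhost:8441/readyz || echo "apiserver unreachable"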
	
	
	==> dmesg <==
	[Apr 1 10:42] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.115403] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.585531] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.211496] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.243279] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.862068] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.224563] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.212159] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.303367] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.779527] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.125264] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.334425] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.690157] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +7.791368] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.115789] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.847693] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.168038] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.042400] systemd-fstab-generator[3395]: Ignoring "noauto" option for root device
	[  +0.238049] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 10:43] kauditd_printk_skb: 65 callbacks suppressed
	[Apr 1 10:44] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.654601] systemd-fstab-generator[4704]: Ignoring "noauto" option for root device
	[  +0.265712] systemd-fstab-generator[4719]: Ignoring "noauto" option for root device
	[  +0.315556] systemd-fstab-generator[4734]: Ignoring "noauto" option for root device
	[  +5.367808] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:07:48 up 27 min,  0 users,  load average: 0.09, 0.06, 0.07
	Linux functional-706500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 11:07:42 functional-706500 kubelet[2870]: E0401 11:07:42.172411    2870 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 01 11:07:44 functional-706500 kubelet[2870]: E0401 11:07:44.456937    2870 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 23m13.30905232s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 01 11:07:46 functional-706500 kubelet[2870]: E0401 11:07:46.951969    2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused" interval="7s"
	Apr 01 11:07:46 functional-706500 kubelet[2870]: E0401 11:07:46.989153    2870 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:07:46 functional-706500 kubelet[2870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:07:46 functional-706500 kubelet[2870]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:07:46 functional-706500 kubelet[2870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:07:46 functional-706500 kubelet[2870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
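
Note: the iptables-canary failure is a separate, cosmetic issue: this kernel has no ip6tables nat table loaded, so kubelet cannot create its IPv6 canary chain. Where it actually matters, the usual remedy is loading the module (a sketch, assuming the kernel ships it):

	sudo modprobe ip6table_nat
	sudo ip6tables -t nat -L -n    # the nat table should now list

It is unrelated to the Docker outage that is failing these tests.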
	Apr 01 11:07:47 functional-706500 kubelet[2870]: E0401 11:07:47.814521    2870 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-706500.17c22214ff7a8a99\": dial tcp 172.19.145.71:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-706500.17c22214ff7a8a99  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-706500,UID:9df4090af0216fb714c930802ab28762,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.19.145.71:8441/readyz\": dial tcp 172.19.145.71:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-706500,},FirstTimestamp:2024-04-01 10:44:41.935121049 +0000 UTC m=+115.349073462,LastTimestamp:2024-04-01 10:44:43.93619564 +0000 UTC m=+117.350148053,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-706500,}"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.499861    2870 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.499924    2870 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.499958    2870 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501282    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501309    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501428    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501446    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: I0401 11:07:48.501458    2870 image_gc_manager.go:215] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501484    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501503    2870 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501564    2870 kubelet.go:2902] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501593    2870 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.501610    2870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.502543    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.502674    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 01 11:07:48 functional-706500 kubelet[2870]: E0401 11:07:48.504480    2870 kubelet.go:1433] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:05:24.402186   14292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 11:05:48.025312   14292 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:05:48.057426   14292 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:05:48.095316   14292 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:05:48.137030   14292 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:06:48.259655   14292 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:06:48.292848   14292 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:06:48.324649   14292 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:06:48.356521   14292 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500: exit status 2 (12.6088863s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:07:49.312875    9332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-706500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (180.72s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (180.68s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
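
Note: this subtest hard-links the built minikube binary to out\kubectl.exe (the error text is Go's os.Link format), and the link fails because the file is left over from the earlier MinikubeKubectlCmd run. A pre-test cleanup sketch, assuming a POSIX shell on the Windows host; the path is the one from the error above:

	rm -f out/kubectl.exe    # drop the stale link before re-linking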
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500: exit status 2 (12.2822669s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:08:01.925878    7240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 logs -n 25
E0401 11:08:23.432839    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 logs -n 25: (2m35.4301958s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| pause   | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | pause                                                       |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                     | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| delete  | -p nospam-189500                                            | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	| start   | -p functional-706500                                        | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
	|         | --memory=4000                                               |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                        | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:51 UTC | 01 Apr 24 10:53 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:53 UTC | 01 Apr 24 10:55 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:55 UTC | 01 Apr 24 10:57 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                 | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:57 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                 |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache delete                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                 |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	| ssh     | functional-706500 ssh sudo                                  | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | crictl images                                               |                   |                   |                |                     |                     |
	| ssh     | functional-706500                                           | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| ssh     | functional-706500 ssh                                       | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache reload                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC | 01 Apr 24 11:01 UTC |
	| ssh     | functional-706500 ssh                                       | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| kubectl | functional-706500 kubectl --                                | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:05 UTC |                     |
	|         | --context functional-706500                                 |                   |                   |                |                     |                     |
	|         | get pods                                                    |                   |                   |                |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:43:10
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:43:10.072128   13224 out.go:291] Setting OutFile to fd 716 ...
	I0401 10:43:10.073323   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.073384   13224 out.go:304] Setting ErrFile to fd 712...
	I0401 10:43:10.073384   13224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:43:10.097024   13224 out.go:298] Setting JSON to false
	I0401 10:43:10.100726   13224 start.go:129] hostinfo: {"hostname":"minikube6","uptime":310948,"bootTime":1711657241,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:43:10.100838   13224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:43:10.105172   13224 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:43:10.107554   13224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:43:10.107554   13224 notify.go:220] Checking for updates...
	I0401 10:43:10.112799   13224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:43:10.115283   13224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:43:10.117610   13224 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:43:10.121040   13224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:43:10.124505   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:10.124505   13224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:43:15.710035   13224 out.go:177] * Using the hyperv driver based on existing profile
	I0401 10:43:15.713423   13224 start.go:297] selected driver: hyperv
	I0401 10:43:15.713545   13224 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.713861   13224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:43:15.770030   13224 cni.go:84] Creating CNI manager for ""
	I0401 10:43:15.770109   13224 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:43:15.770329   13224 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:43:15.770329   13224 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:43:15.777998   13224 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 10:43:15.780145   13224 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:43:15.780237   13224 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:43:15.780237   13224 cache.go:56] Caching tarball of preloaded images
	I0401 10:43:15.780237   13224 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 10:43:15.780237   13224 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 10:43:15.780237   13224 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
	I0401 10:43:15.783414   13224 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 10:43:15.783414   13224 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-706500"
	I0401 10:43:15.784054   13224 start.go:96] Skipping create...Using existing machine configuration
	I0401 10:43:15.784054   13224 fix.go:54] fixHost starting: 
	I0401 10:43:15.785053   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:18.643052   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:18.643052   13224 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 10:43:18.643052   13224 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 10:43:18.646761   13224 out.go:177] * Updating the running hyperv "functional-706500" VM ...
	I0401 10:43:18.648735   13224 machine.go:94] provisionDockerMachine start ...
	I0401 10:43:18.648735   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:20.929729   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:20.929791   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:23.605499   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:23.605829   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:23.611863   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:23.612379   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:23.612379   13224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 10:43:23.742743   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:23.742861   13224 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 10:43:23.742998   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:25.983079   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:25.983575   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:28.649895   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:28.656268   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:28.656268   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:28.656268   13224 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 10:43:28.820899   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 10:43:28.821051   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:31.039114   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:31.040178   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:33.703614   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:33.704376   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:33.709326   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:33.710241   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:33.710241   13224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 10:43:33.843134   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:43:33.843192   13224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 10:43:33.843284   13224 buildroot.go:174] setting up certificates
	I0401 10:43:33.843348   13224 provision.go:84] configureAuth start
	I0401 10:43:33.843416   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:36.090006   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:36.090770   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:38.722372   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:38.722800   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:40.943290   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:40.943497   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:40.943706   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:43.583483   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:43.583483   13224 provision.go:143] copyHostCerts
	I0401 10:43:43.584428   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 10:43:43.584791   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 10:43:43.584791   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 10:43:43.585153   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 10:43:43.586561   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 10:43:43.586884   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 10:43:43.586884   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 10:43:43.587236   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 10:43:43.588288   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 10:43:43.588425   13224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 10:43:43.588425   13224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 10:43:43.589822   13224 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
	I0401 10:43:43.806505   13224 provision.go:177] copyRemoteCerts
	I0401 10:43:43.818752   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 10:43:43.819653   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:46.035972   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:46.036856   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:48.716338   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:48.717231   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:43:48.824115   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0033951s)
	I0401 10:43:48.824185   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 10:43:48.824251   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 10:43:48.875490   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 10:43:48.875659   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 10:43:48.928156   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 10:43:48.928156   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 10:43:48.979923   13224 provision.go:87] duration metric: took 15.1364681s to configureAuth
	I0401 10:43:48.980191   13224 buildroot.go:189] setting minikube options for container-runtime
	I0401 10:43:48.980337   13224 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 10:43:48.980929   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:51.164916   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:51.165900   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:53.792723   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:53.799230   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:53.799230   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:53.799230   13224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 10:43:53.939680   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 10:43:53.939680   13224 buildroot.go:70] root file system type: tmpfs
	I0401 10:43:53.939937   13224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 10:43:53.940027   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:56.125262   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:43:58.801336   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:43:58.817419   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:43:58.817500   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:43:58.817500   13224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 10:43:58.984176   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 10:43:58.984176   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:01.191510   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:01.191660   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:03.875481   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:03.881634   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:03.881634   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:03.881634   13224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 10:44:04.035399   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 10:44:04.035399   13224 machine.go:97] duration metric: took 45.3863416s to provisionDockerMachine
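
The one-liner executed at 10:44:04 is an idempotent update: diff -u exits non-zero only when the staged docker.service.new differs from the live unit, so the || branch (mv, daemon-reload, enable, restart) runs exactly when a change landed. The empty SSH output above means the files matched and docker was not restarted at this step. For reference, the compare-then-swap command as a Go constant (verbatim from the log):

    package main

    import "fmt"

    // The unit is replaced and docker bounced only when the staged copy
    // actually differs from what is installed.
    const updateUnit = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`

    func main() { fmt.Println(updateUnit) }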
	I0401 10:44:04.035734   13224 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 10:44:04.035734   13224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 10:44:04.052038   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 10:44:04.052038   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:06.303947   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:06.304118   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:09.044827   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:09.045318   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:09.151365   13224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0992303s)
	I0401 10:44:09.165345   13224 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 10:44:09.173410   13224 command_runner.go:130] > NAME=Buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 10:44:09.173410   13224 command_runner.go:130] > ID=buildroot
	I0401 10:44:09.173410   13224 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 10:44:09.173410   13224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 10:44:09.173410   13224 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 10:44:09.173410   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 10:44:09.174038   13224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 10:44:09.175293   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 10:44:09.175293   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 10:44:09.176304   13224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 10:44:09.176304   13224 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> /etc/test/nested/copy/1260/hosts
	I0401 10:44:09.189471   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 10:44:09.209890   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 10:44:09.266189   13224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 10:44:09.335015   13224 start.go:296] duration metric: took 5.2992427s for postStartSetup
	I0401 10:44:09.335234   13224 fix.go:56] duration metric: took 53.550799s for fixHost
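
The filesync lines above show the convention postStartSetup applies: each file under the profile's local files tree is mirrored to the same absolute path inside the guest, so files\etc\ssl\certs\12602.pem becomes /etc/ssl/certs/12602.pem. A sketch of that mapping, assuming a plain directory walk (mapAssets is illustrative, not minikube's filesync API):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // mapAssets mirrors the host->guest path convention visible in the log:
    // each file's path relative to the "files" root becomes its absolute
    // destination inside the VM.
    func mapAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, p)
            if err != nil {
                return err
            }
            assets[p] = "/" + filepath.ToSlash(rel)
            return nil
        })
        return assets, err
    }

    func main() {
        m, err := mapAssets(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\files`)
        fmt.Println(m, err)
    }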
	I0401 10:44:09.335234   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:11.524893   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:11.525913   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:14.187737   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:14.194016   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:14.194016   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:14.194565   13224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 10:44:14.343852   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711968254.338870760
	
	I0401 10:44:14.343852   13224 fix.go:216] guest clock: 1711968254.338870760
	I0401 10:44:14.343852   13224 fix.go:229] Guest: 2024-04-01 10:44:14.33887076 +0000 UTC Remote: 2024-04-01 10:44:09.335234 +0000 UTC m=+59.451830901 (delta=5.00363676s)
	I0401 10:44:14.344039   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:16.577948   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:16.578130   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:19.266047   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:19.271412   13224 main.go:141] libmachine: Using SSH client type: native
	I0401 10:44:19.271412   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 10:44:19.271412   13224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711968254
	I0401 10:44:19.442247   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 10:44:14 UTC 2024
	
	I0401 10:44:19.442303   13224 fix.go:236] clock set: Mon Apr  1 10:44:14 UTC 2024
	 (err=<nil>)
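
Reading the fix back: the guest clock is sampled as seconds.nanoseconds, compared with the host-side timestamp captured when fixHost finished, and then pinned with sudo date -s @1711968254 (whole seconds of the guest's own reading). The 5.00363676s delta here is largely the two PowerShell round-trips spent between the samples. A sketch of that fix-up; the one-second drift threshold is an assumption, and run again stands in for the SSH session:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // syncGuestClock reads the guest clock, compares it with a reference
    // time, and resets the guest to whole seconds of its own reading via
    // `date -s`, the command the log shows.
    func syncGuestClock(run func(string) (string, error), ref time.Time) error {
        out, err := run(`date +%s.%N`)
        if err != nil {
            return err
        }
        sec, err := strconv.ParseInt(strings.SplitN(strings.TrimSpace(out), ".", 2)[0], 10, 64)
        if err != nil {
            return err
        }
        if d := time.Unix(sec, 0).Sub(ref); d > time.Second || d < -time.Second {
            _, err = run(fmt.Sprintf("sudo date -s @%d", sec))
        }
        return err
    }

    func main() {
        err := syncGuestClock(func(cmd string) (string, error) {
            if strings.HasPrefix(cmd, "date") {
                return "1711968254.338870760", nil // guest sample from the log
            }
            return "", nil // the `sudo date -s` call
        }, time.Unix(1711968249, 0)) // host reference, 10:44:09 UTC in the log
        fmt.Println(err) // <nil>; the 5s delta exceeds 1s, so date -s @1711968254 ran
    }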
	I0401 10:44:19.442303   13224 start.go:83] releasing machines lock for "functional-706500", held for 1m3.6584368s
	I0401 10:44:19.442621   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:21.638421   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:21.638678   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:24.242979   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:24.247802   13224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 10:44:24.247971   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:24.260879   13224 ssh_runner.go:195] Run: cat /version.json
	I0401 10:44:24.260879   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:26.602726   13224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 10:44:29.395439   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.396663   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.396737   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 10:44:29.417380   13224 main.go:141] libmachine: [stderr =====>] : 
	I0401 10:44:29.417726   13224 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 10:44:29.538631   13224 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2907196s)
	I0401 10:44:29.538631   13224 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 10:44:29.538631   13224 ssh_runner.go:235] Completed: cat /version.json: (5.2777142s)
	I0401 10:44:29.550754   13224 ssh_runner.go:195] Run: systemctl --version
	I0401 10:44:29.560248   13224 command_runner.go:130] > systemd 252 (252)
	I0401 10:44:29.560248   13224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 10:44:29.575260   13224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 10:44:29.584117   13224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 10:44:29.584848   13224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 10:44:29.596050   13224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 10:44:29.618740   13224 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 10:44:29.618740   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:29.619282   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:29.653783   13224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 10:44:29.667491   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 10:44:29.699929   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 10:44:29.719747   13224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 10:44:29.731685   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 10:44:29.769559   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.808138   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 10:44:29.839942   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:44:29.872793   13224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 10:44:29.905536   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 10:44:29.943322   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 10:44:29.976065   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 10:44:30.009202   13224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 10:44:30.027840   13224 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 10:44:30.041084   13224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 10:44:30.074413   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:30.372540   13224 ssh_runner.go:195] Run: sudo systemctl restart containerd
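
The run of sed edits from 10:44:29.667 onward rewrites /etc/containerd/config.toml in place: pin the pause image, disable restrict_oom_score_adj, force SystemdCgroup = false (the "cgroupfs" driver named at containerd.go:146), normalize the runc runtime name, and point conf_dir at /etc/cni/net.d. They all share one shape, anchoring a key and rewriting its value while preserving indentation; a Go rendering of the SystemdCgroup edit, shown against a sample string rather than the real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        sample := []byte("      SystemdCgroup = true\n")
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Printf("%s", re.ReplaceAll(sample, []byte("${1}SystemdCgroup = false")))
    }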
	I0401 10:44:30.410347   13224 start.go:494] detecting cgroup driver to use...
	I0401 10:44:30.423188   13224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 10:44:30.448708   13224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 10:44:30.448805   13224 command_runner.go:130] > [Unit]
	I0401 10:44:30.448846   13224 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 10:44:30.448846   13224 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 10:44:30.448846   13224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 10:44:30.448846   13224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 10:44:30.448846   13224 command_runner.go:130] > StartLimitBurst=3
	I0401 10:44:30.448960   13224 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 10:44:30.448960   13224 command_runner.go:130] > [Service]
	I0401 10:44:30.448960   13224 command_runner.go:130] > Type=notify
	I0401 10:44:30.449175   13224 command_runner.go:130] > Restart=on-failure
	I0401 10:44:30.449254   13224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 10:44:30.449254   13224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 10:44:30.449324   13224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 10:44:30.449324   13224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 10:44:30.449324   13224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 10:44:30.449324   13224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 10:44:30.449324   13224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 10:44:30.449424   13224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 10:44:30.449463   13224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 10:44:30.449493   13224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 10:44:30.449493   13224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNOFILE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitNPROC=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > LimitCORE=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 10:44:30.449493   13224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0401 10:44:30.449493   13224 command_runner.go:130] > TasksMax=infinity
	I0401 10:44:30.449493   13224 command_runner.go:130] > TimeoutStartSec=0
	I0401 10:44:30.449493   13224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 10:44:30.449493   13224 command_runner.go:130] > Delegate=yes
	I0401 10:44:30.449493   13224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 10:44:30.449493   13224 command_runner.go:130] > KillMode=process
	I0401 10:44:30.449493   13224 command_runner.go:130] > [Install]
	I0401 10:44:30.449493   13224 command_runner.go:130] > WantedBy=multi-user.target
	I0401 10:44:30.462236   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.498715   13224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 10:44:30.555736   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 10:44:30.592141   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:44:30.614828   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:44:30.648516   13224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0401 10:44:30.661442   13224 ssh_runner.go:195] Run: which cri-dockerd
	I0401 10:44:30.667057   13224 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 10:44:30.680368   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 10:44:30.698489   13224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 10:44:30.744122   13224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 10:44:31.017163   13224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 10:44:31.281375   13224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 10:44:31.281375   13224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 10:44:31.335768   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:44:31.610524   13224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 10:45:43.060669   13224 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0401 10:45:43.060669   13224 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0401 10:45:43.063059   13224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4520277s)
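
This is the section's real failure: the post-reconfiguration docker restart ran for 1m11s and came back with "Job for docker.service failed because the control process exited with error code", after which the unit's journal is pulled for diagnosis (next line). A sketch of that restart-then-collect fallback under the same hypothetical run closure:

    package main

    import "fmt"

    // restartDocker mirrors the failing sequence above: daemon-reload,
    // restart, and on failure capture the docker unit's journal.
    func restartDocker(run func(string) (string, error)) error {
        if _, err := run("sudo systemctl daemon-reload"); err != nil {
            return err
        }
        if _, err := run("sudo systemctl restart docker"); err != nil {
            logs, _ := run("sudo journalctl --no-pager -u docker")
            return fmt.Errorf("restart docker: %v\n%s", err, logs)
        }
        return nil
    }

    func main() {
        err := restartDocker(func(cmd string) (string, error) {
            if cmd == "sudo systemctl restart docker" {
                return "", fmt.Errorf("control process exited with error code")
            }
            return "", nil
        })
        fmt.Println(err)
    }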
	I0401 10:45:43.077122   13224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.108958   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.109050   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.109094   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.109680   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.109852   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.109935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110012   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110146   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110225   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110318   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110423   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110499   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110614   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110708   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.110781   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.110863   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.110935   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	I0401 10:45:43.110969   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.111042   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.111111   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.111139   13224 command_runner.go:130] > Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.111214   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.111287   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.111359   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.111439   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	I0401 10:45:43.111510   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.111582   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111656   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.111804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.112061   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.112143   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.112218   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112316   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112397   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112470   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112542   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112641   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112804   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112865   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112898   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.112978   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.113006   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.113581   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.115799   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.115908   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.116447   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.116561   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 10:45:43.117153   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 10:45:43.117396   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	I0401 10:45:43.118488   13224 command_runner.go:130] > Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 10:45:43.118558   13224 command_runner.go:130] > Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	I0401 10:45:43.118619   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 10:45:43.118656   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 10:45:43.118736   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	I0401 10:45:43.118786   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 10:45:43.118818   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	I0401 10:45:43.118867   13224 command_runner.go:130] > Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	I0401 10:45:43.118899   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119018   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119050   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119189   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119310   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.119425   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.119982   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.121400   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122002   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.122046   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.123308   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.123572   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124168   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0401 10:45:43.124299   13224 command_runner.go:130] > Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
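	
	The journal tail replayed above isolates the failure: a fourth dockerd instance (pid 5139) logs "Starting up" at 10:44:43, and 60 seconds later its dial to /run/containerd/containerd.sock exceeds the context deadline, so systemd marks docker.service failed. Unlike the three earlier starts (pids 664, 1025, 1345), which each booted a managed containerd on /var/run/docker/containerd/containerd.sock, the failing dial targets the system containerd socket, so this start appears to be waiting on a containerd that never came up. A minimal triage sketch, assuming shell access to the functional-706500 VM (the profile name is taken from the log; the commands are stock minikube/systemd tooling, not part of the recorded test output):
	
	    # confirm the failed unit and read the same journal dumped below
	    out/minikube-windows-amd64.exe -p functional-706500 ssh "sudo systemctl status docker.service"
	    out/minikube-windows-amd64.exe -p functional-706500 ssh "sudo journalctl -xeu docker.service"
	    # check whether the socket dockerd[5139] tried to dial exists at all
	    out/minikube-windows-amd64.exe -p functional-706500 ssh "ls -l /run/containerd/containerd.sock"
	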
	I0401 10:45:43.152344   13224 out.go:177] 
	W0401 10:45:43.155037   13224 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 10:45:43.156924   13224 out.go:239] * 
	W0401 10:45:43.158470   13224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 10:45:43.165740   13224 out.go:177] 
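The repeated "failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded" above means dockerd came up, spent its dial timeout waiting on a containerd socket that never answered, and exited with status 1; systemd then restarts it and the cycle repeats. A minimal triage sketch, assuming shell access to the guest (for example via `minikube ssh -p functional-706500`) and using the socket path named in the error; none of these commands are run by the test itself, and the containerd unit name is assumed:

	# Does the socket dockerd is dialing exist at all?
	sudo ls -l /run/containerd/containerd.sock
	# If it exists, is containerd actually answering on it?
	sudo ctr --address /run/containerd/containerd.sock version
	# What does the containerd unit itself say? (unit name assumed: containerd.service)
	sudo systemctl status containerd --no-pager
	sudo journalctl --no-pager -u containerd -n 30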
	
	
	==> Docker <==
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:07:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:07:48 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:07:48Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:07:48 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:07:48Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 01 11:07:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:07:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:07:48 functional-706500 dockerd[9356]: time="2024-04-01T11:07:48.729681187Z" level=info msg="Starting up"
	Apr 01 11:08:48 functional-706500 dockerd[9356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:08:48 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:08:48Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:08:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 01 11:08:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:08:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:08:48 functional-706500 dockerd[9619]: time="2024-04-01T11:08:48.961882907Z" level=info msg="Starting up"
	Apr 01 11:09:48 functional-706500 dockerd[9619]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:09:48 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:09:48Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:09:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:09:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:09:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:09:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Apr 01 11:09:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:09:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
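The climbing restart counter above (23, 24, 25) is systemd's Restart= policy re-launching a daemon that dies the same way every minute: "Starting up", sixty seconds of dialing containerd, exit 1. A sketch of summarizing the loop instead of scrolling it, using standard systemd tooling (NRestarts, Result, and ExecMainStatus are ordinary service properties); this is a diagnostic suggestion, not part of the test run:

	# One-line vitals for the flapping unit
	sudo systemctl show docker.service -p NRestarts,Result,ExecMainStatus,ActiveState
	# The last few start/fail pairs without the pager
	sudo journalctl --no-pager -u docker.service -n 40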
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-01T11:09:51Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
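Both probes in this section fail for the same upstream reason: crictl reaches cri-dockerd, but cri-dockerd's backend Docker daemon is down, and the direct `docker ps -a` fallback hits the dead socket too. A sketch of separating the two layers, using the endpoint paths named in the errors above (not something the harness runs):

	# Talk to cri-dockerd directly; it is up but will report a dead backend
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
	# Bypass cri-dockerd and ping the Docker socket itself
	sudo curl --silent --unix-socket /var/run/docker.sock http://localhost/_ping && echo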
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 10:42] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.115403] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.585531] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.211496] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.243279] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.862068] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.224563] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.212159] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.303367] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.779527] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.125264] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.334425] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.690157] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +7.791368] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.115789] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.847693] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.168038] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.042400] systemd-fstab-generator[3395]: Ignoring "noauto" option for root device
	[  +0.238049] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 10:43] kauditd_printk_skb: 65 callbacks suppressed
	[Apr 1 10:44] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.654601] systemd-fstab-generator[4704]: Ignoring "noauto" option for root device
	[  +0.265712] systemd-fstab-generator[4719]: Ignoring "noauto" option for root device
	[  +0.315556] systemd-fstab-generator[4734]: Ignoring "noauto" option for root device
	[  +5.367808] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:10:49 up 30 min,  0 users,  load average: 0.00, 0.03, 0.05
	Linux functional-706500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 11:10:45 functional-706500 kubelet[2870]: E0401 11:10:45.439615    2870 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-706500\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused"
	Apr 01 11:10:45 functional-706500 kubelet[2870]: E0401 11:10:45.439756    2870 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 01 11:10:45 functional-706500 kubelet[2870]: E0401 11:10:45.520743    2870 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-706500.17c22213446146cc\": dial tcp 172.19.145.71:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-706500.17c22213446146cc  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-706500,UID:cb3166e027dbf49aaaaac8ecac66c5ee,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://127.0.0.1:2381/health?exclude=NOSPACE&serializable=true\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-706500,},FirstTimestamp:2024-04-01 10:44:34.501158604 +0000 UTC m=+107.915111017,LastTimestamp:2024-04-01 10:44:44.49930071 +0000 UTC m=+117.913253123,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-706500,}"
	Apr 01 11:10:46 functional-706500 kubelet[2870]: E0401 11:10:46.989027    2870 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:10:46 functional-706500 kubelet[2870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:10:46 functional-706500 kubelet[2870]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:10:46 functional-706500 kubelet[2870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:10:46 functional-706500 kubelet[2870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.016809    2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused" interval="7s"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.243639    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.243842    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: I0401 11:10:49.244023    2870 image_gc_manager.go:215] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.244050    2870 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.244532    2870 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.246211    2870 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.244620    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.244665    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.247100    2870 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.243666    2870 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.247233    2870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.246390    2870 kubelet.go:2902] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.247037    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.249243    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.249869    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 01 11:10:49 functional-706500 kubelet[2870]: E0401 11:10:49.250772    2870 kubelet.go:1433] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
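Every kubelet error in this block is downstream of the same dead /var/run/docker.sock: node-status updates, lease renewal, image GC, PLEG, and log rotation all fail once the runtime is gone. If one were recovering such a node by hand, the units would have to come back in dependency order; a sketch follows, with the caveat that cri-docker.service/cri-docker.socket are the unit names conventionally used for cri-dockerd and are assumed here rather than read from this log:

	sudo systemctl restart containerd          # the socket dockerd dials
	sudo systemctl restart docker              # the daemon cri-dockerd proxies to
	sudo systemctl restart cri-docker.socket cri-docker.service
	sudo systemctl restart kubelet             # re-registers once the runtime is Ready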
	

-- /stdout --
** stderr ** 
	W0401 11:08:14.211260   12268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 11:08:48.768003   12268 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:08:48.802106   12268 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:08:48.835587   12268 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:08:48.864337   12268 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:09:48.994619   12268 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:09:49.028049   12268 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:09:49.059468   12268 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:09:49.093010   12268 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500: exit status 2 (12.5275998s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0401 11:10:50.076366    7764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-706500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (180.68s)

TestFunctional/serial/ExtraConfig (301.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-706500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0401 11:13:23.448411    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-706500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m47.5474099s)

-- stdout --
	* [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	* Updating the running hyperv "functional-706500" VM ...
	
	

-- /stdout --
** stderr ** 
	W0401 11:11:02.602279   13928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 01 10:45:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:45:43 functional-706500 dockerd[5287]: time="2024-04-01T10:45:43.246929152Z" level=info msg="Starting up"
	Apr 01 10:46:43 functional-706500 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:46:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 01 10:46:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:46:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:46:43 functional-706500 dockerd[5501]: time="2024-04-01T10:46:43.470357918Z" level=info msg="Starting up"
	Apr 01 10:47:43 functional-706500 dockerd[5501]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:47:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 01 10:47:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:47:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:47:43 functional-706500 dockerd[5669]: time="2024-04-01T10:47:43.721563433Z" level=info msg="Starting up"
	Apr 01 10:48:43 functional-706500 dockerd[5669]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:48:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 01 10:48:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:48:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:48:43 functional-706500 dockerd[5837]: time="2024-04-01T10:48:43.970136068Z" level=info msg="Starting up"
	Apr 01 10:49:43 functional-706500 dockerd[5837]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:49:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:49:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:49:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:49:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 01 10:49:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:49:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:49:44 functional-706500 dockerd[6096]: time="2024-04-01T10:49:44.210275023Z" level=info msg="Starting up"
	Apr 01 10:50:44 functional-706500 dockerd[6096]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:50:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Apr 01 10:50:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:50:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:50:44 functional-706500 dockerd[6262]: time="2024-04-01T10:50:44.445595351Z" level=info msg="Starting up"
	Apr 01 10:51:44 functional-706500 dockerd[6262]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:51:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:51:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:51:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:51:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Apr 01 10:51:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:51:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:51:44 functional-706500 dockerd[6427]: time="2024-04-01T10:51:44.713240118Z" level=info msg="Starting up"
	Apr 01 10:52:44 functional-706500 dockerd[6427]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:52:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:52:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:52:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:52:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Apr 01 10:52:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:52:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:52:44 functional-706500 dockerd[6633]: time="2024-04-01T10:52:44.963460969Z" level=info msg="Starting up"
	Apr 01 10:53:44 functional-706500 dockerd[6633]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:53:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:53:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:53:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:53:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Apr 01 10:53:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:53:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:53:45 functional-706500 dockerd[6798]: time="2024-04-01T10:53:45.219771451Z" level=info msg="Starting up"
	Apr 01 10:54:45 functional-706500 dockerd[6798]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:54:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:54:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:54:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:54:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Apr 01 10:54:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:54:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:54:45 functional-706500 dockerd[6965]: time="2024-04-01T10:54:45.476248045Z" level=info msg="Starting up"
	Apr 01 10:55:45 functional-706500 dockerd[6965]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:55:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:55:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:55:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:55:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Apr 01 10:55:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:55:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:55:45 functional-706500 dockerd[7131]: time="2024-04-01T10:55:45.712231475Z" level=info msg="Starting up"
	Apr 01 10:56:45 functional-706500 dockerd[7131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:56:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:56:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:56:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:56:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Apr 01 10:56:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:56:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:56:45 functional-706500 dockerd[7303]: time="2024-04-01T10:56:45.972411662Z" level=info msg="Starting up"
	Apr 01 10:57:45 functional-706500 dockerd[7303]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:57:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:57:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:57:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:57:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Apr 01 10:57:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:57:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:57:46 functional-706500 dockerd[7458]: time="2024-04-01T10:57:46.228413043Z" level=info msg="Starting up"
	Apr 01 10:58:46 functional-706500 dockerd[7458]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:58:46 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:58:46 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:58:46 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:58:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 01 10:58:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:58:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:58:46 functional-706500 dockerd[7639]: time="2024-04-01T10:58:46.451545776Z" level=info msg="Starting up"
	Apr 01 10:59:46 functional-706500 dockerd[7639]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:59:46 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:59:46 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:59:46 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:59:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Apr 01 10:59:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:59:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:59:46 functional-706500 dockerd[7825]: time="2024-04-01T10:59:46.724851379Z" level=info msg="Starting up"
	Apr 01 11:00:46 functional-706500 dockerd[7825]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:00:46 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:00:46 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:00:46 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:00:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Apr 01 11:00:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:00:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:00:46 functional-706500 dockerd[8019]: time="2024-04-01T11:00:46.990871590Z" level=info msg="Starting up"
	Apr 01 11:01:47 functional-706500 dockerd[8019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:01:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:01:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:01:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:01:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Apr 01 11:01:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:01:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:01:47 functional-706500 dockerd[8194]: time="2024-04-01T11:01:47.240553866Z" level=info msg="Starting up"
	Apr 01 11:02:47 functional-706500 dockerd[8194]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:02:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:02:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:02:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:02:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Apr 01 11:02:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:02:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:02:47 functional-706500 dockerd[8429]: time="2024-04-01T11:02:47.458291117Z" level=info msg="Starting up"
	Apr 01 11:03:47 functional-706500 dockerd[8429]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:03:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:03:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:03:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:03:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Apr 01 11:03:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:03:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:03:47 functional-706500 dockerd[8600]: time="2024-04-01T11:03:47.690450635Z" level=info msg="Starting up"
	Apr 01 11:04:47 functional-706500 dockerd[8600]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:04:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 01 11:04:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:04:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:04:47 functional-706500 dockerd[8762]: time="2024-04-01T11:04:47.974793898Z" level=info msg="Starting up"
	Apr 01 11:05:48 functional-706500 dockerd[8762]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:05:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 01 11:05:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:05:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:05:48 functional-706500 dockerd[9019]: time="2024-04-01T11:05:48.218482190Z" level=info msg="Starting up"
	Apr 01 11:06:48 functional-706500 dockerd[9019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:06:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Apr 01 11:06:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:06:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:06:48 functional-706500 dockerd[9189]: time="2024-04-01T11:06:48.467778205Z" level=info msg="Starting up"
	Apr 01 11:07:48 functional-706500 dockerd[9189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:07:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 01 11:07:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:07:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:07:48 functional-706500 dockerd[9356]: time="2024-04-01T11:07:48.729681187Z" level=info msg="Starting up"
	Apr 01 11:08:48 functional-706500 dockerd[9356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:08:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 01 11:08:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:08:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:08:48 functional-706500 dockerd[9619]: time="2024-04-01T11:08:48.961882907Z" level=info msg="Starting up"
	Apr 01 11:09:48 functional-706500 dockerd[9619]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:09:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:09:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:09:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:09:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Apr 01 11:09:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:09:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:09:49 functional-706500 dockerd[9784]: time="2024-04-01T11:09:49.212023599Z" level=info msg="Starting up"
	Apr 01 11:10:49 functional-706500 dockerd[9784]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:10:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Apr 01 11:10:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:10:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:10:49 functional-706500 dockerd[9947]: time="2024-04-01T11:10:49.471879307Z" level=info msg="Starting up"
	Apr 01 11:11:49 functional-706500 dockerd[9947]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:11:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
	Apr 01 11:11:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:11:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:11:49 functional-706500 dockerd[10267]: time="2024-04-01T11:11:49.718265627Z" level=info msg="Starting up"
	Apr 01 11:12:24 functional-706500 dockerd[10267]: time="2024-04-01T11:12:24.990418752Z" level=info msg="Processing signal 'terminated'"
	Apr 01 11:12:49 functional-706500 dockerd[10267]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:12:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:12:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:12:49 functional-706500 dockerd[10662]: time="2024-04-01T11:12:49.836764290Z" level=info msg="Starting up"
	Apr 01 11:13:49 functional-706500 dockerd[10662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:13:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-706500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m47.6975354s for "functional-706500" cluster.
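
The stdout log above shows a single failure mode repeated until the systemd restart counter reaches 27: each fresh dockerd process blocks for exactly one minute trying to reach containerd on /run/containerd/containerd.sock, gives up with "context deadline exceeded", and systemd schedules the next restart. A minimal Go sketch of that dial-until-deadline pattern follows; the socket path and one-minute budget come from the log, while the retry loop itself is an assumption for illustration, not dockerd's actual startup code.

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Keep dialing containerd's unix socket until the deadline expires.
    	// With nothing listening, every dial fails, and the loop exits with
    	// "context deadline exceeded" -- the same error each dockerd attempt
    	// in the log reports after one minute.
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	var d net.Dialer
    	for {
    		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
    		if err == nil {
    			conn.Close()
    			fmt.Println("containerd socket is reachable")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("failed to dial:", ctx.Err())
    			return
    		case <-time.After(time.Second):
    		}
    	}
    }

Because the engine never comes back up, every later step in this test runs against a dead Docker daemon, which is what the post-mortem below records.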
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500: exit status 2 (12.5536215s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:13:50.331914    3128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
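
The status probe above passes a Go text/template through --format={{.Host}}, so minikube prints only the rendered Host field ("Running") even though the command exits with status 2. A self-contained sketch of that rendering mechanism follows; the Status struct and its extra field names are illustrative assumptions, with only Host confirmed by the command itself.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status stands in for the struct minikube renders with --format.
    // Only Host is confirmed by the command above; the other fields are
    // assumptions for illustration.
    type Status struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	// Same Go template syntax as `status --format={{.Host}}`.
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
    }

This is also why helpers_test.go notes that exit status 2 "may be ok": the host VM is Running even though the components behind it are not.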
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 logs -n 25: (1m48.0509928s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| unpause | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | unpause                                                                  |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | unpause                                                                  |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | unpause                                                                  |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | stop                                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | stop                                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | stop                                                                     |                   |                   |                |                     |                     |
	| delete  | -p nospam-189500                                                         | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	| start   | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
	|         | --memory=4000                                                            |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:51 UTC | 01 Apr 24 10:53 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:53 UTC | 01 Apr 24 10:55 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:55 UTC | 01 Apr 24 10:57 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:57 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                              |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache delete                                           | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                              |                   |                   |                |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |                |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	| ssh     | functional-706500 ssh sudo                                               | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | crictl images                                                            |                   |                   |                |                     |                     |
	| ssh     | functional-706500                                                        | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| ssh     | functional-706500 ssh                                                    | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache reload                                           | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC | 01 Apr 24 11:01 UTC |
	| ssh     | functional-706500 ssh                                                    | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |                |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| kubectl | functional-706500 kubectl --                                             | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:05 UTC |                     |
	|         | --context functional-706500                                              |                   |                   |                |                     |                     |
	|         | get pods                                                                 |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:11 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |                |                     |                     |
	|         | --wait=all                                                               |                   |                   |                |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 11:11:02
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 11:11:02.675408   13928 out.go:291] Setting OutFile to fd 864 ...
	I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:11:02.677219   13928 out.go:304] Setting ErrFile to fd 1008...
	I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:11:02.702122   13928 out.go:298] Setting JSON to false
	I0401 11:11:02.706199   13928 start.go:129] hostinfo: {"hostname":"minikube6","uptime":312621,"bootTime":1711657241,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 11:11:02.706199   13928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 11:11:02.714712   13928 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 11:11:02.718618   13928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:11:02.718618   13928 notify.go:220] Checking for updates...
	I0401 11:11:02.720771   13928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:11:02.723643   13928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 11:11:02.727188   13928 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:11:02.728921   13928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:11:02.731796   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:11:02.732867   13928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:11:08.299905   13928 out.go:177] * Using the hyperv driver based on existing profile
	I0401 11:11:08.301941   13928 start.go:297] selected driver: hyperv
	I0401 11:11:08.301941   13928 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:11:08.301941   13928 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:11:08.352980   13928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:11:08.352980   13928 cni.go:84] Creating CNI manager for ""
	I0401 11:11:08.352980   13928 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 11:11:08.352980   13928 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:11:08.353717   13928 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 11:11:08.358695   13928 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 11:11:08.360937   13928 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:11:08.360937   13928 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 11:11:08.360937   13928 cache.go:56] Caching tarball of preloaded images
	I0401 11:11:08.361555   13928 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:11:08.361555   13928 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:11:08.361555   13928 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
	I0401 11:11:08.364025   13928 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:11:08.364637   13928 start.go:364] duration metric: took 612.4µs to acquireMachinesLock for "functional-706500"
	I0401 11:11:08.364637   13928 start.go:96] Skipping create...Using existing machine configuration
	I0401 11:11:08.364637   13928 fix.go:54] fixHost starting: 
	I0401 11:11:08.365259   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:11.219191   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:11.219191   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:11.219191   13928 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 11:11:11.219191   13928 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 11:11:11.224961   13928 out.go:177] * Updating the running hyperv "functional-706500" VM ...
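Editor's note: the restart decision above is driven entirely by shelling out to PowerShell; every state check in this log is a fresh powershell.exe invocation. A minimal standalone sketch of that query pattern in Go (the helper name vmState is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmState runs the same PowerShell query the log shows and returns
	// the trimmed stdout, e.g. "Running". Path and flags mirror the
	// logged command; error handling is simplified for the sketch.
	func vmState(vmName string) (string, error) {
		cmd := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName),
		)
		out, err := cmd.Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := vmState("functional-706500")
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		fmt.Println("VM state:", state)
	}

Each such round-trip takes 2-3 seconds in the timestamps above, which is why the state/IP polling dominates the wall-clock time of this phase.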
	I0401 11:11:11.227211   13928 machine.go:94] provisionDockerMachine start ...
	I0401 11:11:11.227211   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:13.436978   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:13.437051   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:13.437051   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:16.016234   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:16.016234   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:16.022101   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:16.022735   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:16.022735   13928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:11:16.166363   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
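Editor's note: each "About to run SSH command" / "SSH cmd err, output" pair is a one-shot SSH session against the VM. A self-contained sketch of the same round-trip with golang.org/x/crypto/ssh, reusing the IP, user, and key path from the log (skipping host-key verification here is purely for illustration on a throwaway test VM):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path as logged; adjust for your environment.
		key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa`)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
		}
		client, err := ssh.Dial("tcp", "172.19.145.71:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		// One session per command, mirroring the hostname check above.
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("hostname => %q err=%v\n", out, err)
	}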
	
	I0401 11:11:16.166363   13928 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 11:11:16.166363   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:18.351617   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:18.351861   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:18.351939   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:20.983006   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:20.983006   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:20.988905   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:20.989438   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:20.989438   13928 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 11:11:21.150541   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 11:11:21.150541   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:23.340502   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:23.340882   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:23.340882   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:26.010381   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:26.010381   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:26.017280   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:26.017280   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:26.017280   13928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:11:26.148356   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
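Editor's note: the guarded shell above is idempotent: it rewrites an existing 127.0.1.1 line rather than appending duplicates, and does nothing when the hostname is already present. The same logic in Go, as a sketch (path and entry come from the log; the whole-token match is a simplification of the two grep patterns):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites an existing "127.0.1.1 ..." line to point
	// at name, or appends one if no such line exists. It is a no-op when
	// name already appears as a token, mirroring the guarded shell above.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			for _, tok := range strings.Fields(l) {
				if tok == name {
					return nil // already present
				}
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+name)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "functional-706500"); err != nil {
			fmt.Println("update failed:", err)
		}
	}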
	I0401 11:11:26.148356   13928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:11:26.148425   13928 buildroot.go:174] setting up certificates
	I0401 11:11:26.148425   13928 provision.go:84] configureAuth start
	I0401 11:11:26.148425   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:28.313151   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:28.313151   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:28.313690   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:30.949501   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:30.949501   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:30.949682   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:33.152310   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:33.152310   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:33.153208   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:35.840975   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:35.841190   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:35.841190   13928 provision.go:143] copyHostCerts
	I0401 11:11:35.841696   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:11:35.841696   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:11:35.842172   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:11:35.843643   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:11:35.843643   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:11:35.843959   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:11:35.845117   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:11:35.845117   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:11:35.845219   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:11:35.846465   13928 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
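Editor's note: the generated server certificate carries the SANs listed above (127.0.0.1, the VM IP, the hostname, localhost, minikube) so TLS verifies however the daemon is addressed. A simplified sketch with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca-key.pem shown in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs as logged: loopback, the VM IP, hostname, localhost, minikube.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-706500"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"functional-706500", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.145.71")},
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed for brevity; minikube uses its CA cert/key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}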
	I0401 11:11:36.004672   13928 provision.go:177] copyRemoteCerts
	I0401 11:11:36.018395   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:11:36.018615   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:40.901918   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:40.901918   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:40.902935   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:11:41.015623   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9971931s)
	I0401 11:11:41.015623   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 11:11:41.066242   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:11:41.112581   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 11:11:41.163579   13928 provision.go:87] duration metric: took 15.0150492s to configureAuth
	I0401 11:11:41.163579   13928 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:11:41.164283   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:11:41.164366   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:45.977441   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:45.977441   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:45.982713   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:45.983473   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:45.983473   13928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:11:46.126333   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:11:46.126333   13928 buildroot.go:70] root file system type: tmpfs
	I0401 11:11:46.126632   13928 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:11:46.126702   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:50.968210   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:50.968210   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:50.975527   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:50.976107   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:50.976324   13928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:11:51.143276   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 11:11:51.143355   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:53.313983   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:53.313983   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:53.315008   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:55.944506   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:55.944506   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:55.952199   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:55.952984   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:55.952984   13928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:11:56.098851   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:11:56.098851   13928 machine.go:97] duration metric: took 44.8713258s to provisionDockerMachine
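Editor's note: the one-liner above is a change-detection idiom: only when the staged docker.service.new differs from the installed unit does it move the file into place and daemon-reload/enable/restart docker, which is why an unchanged unit costs almost nothing. A Go sketch of the same compare-then-swap (the function name and the remove-when-identical cleanup are my additions):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// replaceIfChanged installs newPath over path and restarts the unit
	// only when the contents differ, mirroring the logged
	// diff || { mv; daemon-reload; enable; restart; } chain.
	func replaceIfChanged(path, newPath, unit string) error {
		old, _ := os.ReadFile(path) // a missing file counts as "changed"
		staged, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		if bytes.Equal(old, staged) {
			return os.Remove(newPath) // nothing to do; drop the staged copy
		}
		if err := os.Rename(newPath, path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", unit},
			{"systemctl", "restart", unit},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		err := replaceIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
			"docker",
		)
		if err != nil {
			fmt.Println(err)
		}
	}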
	I0401 11:11:56.098912   13928 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 11:11:56.098912   13928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:11:56.112299   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:11:56.112299   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:58.314497   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:58.314497   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:58.314550   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:01.020981   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:01.021995   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:01.022193   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:01.123158   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.010824s)
	I0401 11:12:01.135833   13928 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:12:01.144329   13928 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:12:01.144329   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:12:01.145605   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:12:01.147606   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:12:01.148343   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 11:12:01.161761   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 11:12:01.191883   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:12:01.242289   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 11:12:01.293425   13928 start.go:296] duration metric: took 5.1944768s for postStartSetup
	I0401 11:12:01.293551   13928 fix.go:56] duration metric: took 52.928543s for fixHost
	I0401 11:12:01.293609   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:06.113661   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:06.113926   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:06.119841   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:12:06.120607   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:12:06.120607   13928 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 11:12:06.256997   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711969926.251527594
	
	I0401 11:12:06.256997   13928 fix.go:216] guest clock: 1711969926.251527594
	I0401 11:12:06.256997   13928 fix.go:229] Guest: 2024-04-01 11:12:06.251527594 +0000 UTC Remote: 2024-04-01 11:12:01.2935512 +0000 UTC m=+58.789370401 (delta=4.957976394s)
	I0401 11:12:06.257089   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:08.474846   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:08.474908   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:08.474908   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:11.120345   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:11.120345   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:11.130054   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:12:11.130207   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:12:11.130207   13928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711969926
	I0401 11:12:11.274198   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:12:06 UTC 2024
	
	I0401 11:12:11.274198   13928 fix.go:236] clock set: Mon Apr  1 11:12:06 UTC 2024
	 (err=<nil>)
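Editor's note: the clock fix works by reading the guest's `date +%s.%N`, comparing it against the host clock, and then issuing `sudo date -s @<seconds>` on the guest. A sketch of the parsing and comparison (the 2s resync threshold is an assumption; the log itself only records the delta and the date command):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output, for example
	// "1711969926.251527594", into a time.Time. %N always yields nine
	// digits, so the fractional part maps directly to nanoseconds.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1711969926.251527594")
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now()) // host side of the comparison
		fmt.Printf("guest=%s delta=%s\n", guest.UTC(), delta)
		// Threshold is an assumption for the sketch; the real flow then
		// runs `sudo date -s @<seconds>` over SSH, as logged above.
		if delta > 2*time.Second || delta < -2*time.Second {
			fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
		}
	}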
	I0401 11:12:11.274198   13928 start.go:83] releasing machines lock for "functional-706500", held for 1m2.90912s
	I0401 11:12:11.274396   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:16.095069   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:16.095069   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:16.098899   13928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:12:16.099055   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:16.110253   13928 ssh_runner.go:195] Run: cat /version.json
	I0401 11:12:16.110253   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:18.356456   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:18.357479   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:18.357605   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:21.024542   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:21.024944   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:21.025655   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:21.064561   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:21.065133   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:21.065201   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:21.132409   13928 ssh_runner.go:235] Completed: cat /version.json: (5.0219453s)
	I0401 11:12:21.146776   13928 ssh_runner.go:195] Run: systemctl --version
	I0401 11:12:23.162851   13928 ssh_runner.go:235] Completed: systemctl --version: (2.0160608s)
	I0401 11:12:23.162913   13928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0639645s)
	W0401 11:12:23.162913   13928 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0401 11:12:23.162913   13928 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0401 11:12:23.162913   13928 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
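Editor's note: the failed probe above is just `curl -sS -m 2 https://registry.k8s.io/` executed inside the VM; DNS resolution timed out, hence the proxy hint. An equivalent standalone probe in Go with the same 2-second budget (this sketch runs locally rather than over SSH):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// Probe registry.k8s.io with a 2s overall budget; any error here,
	// including a resolver timeout like the one logged, is treated as
	// "the VM cannot pull external images without a proxy".
	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("https://registry.k8s.io/")
		if err != nil {
			fmt.Println("! registry unreachable:", err)
			fmt.Println("* consider a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/")
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry reachable:", resp.Status)
	}

Note that the failure is nonfatal: the start continues, since cached/preloaded images may still satisfy the cluster.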
	I0401 11:12:23.175996   13928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 11:12:23.184884   13928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:12:23.197334   13928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:12:23.215553   13928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 11:12:23.215605   13928 start.go:494] detecting cgroup driver to use...
	I0401 11:12:23.215905   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:12:23.265017   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:12:23.295678   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:12:23.315671   13928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:12:23.327504   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:12:23.360952   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:12:23.394039   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:12:23.426344   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:12:23.456513   13928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:12:23.488100   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:12:23.519746   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:12:23.553091   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:12:23.592313   13928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:12:23.623585   13928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:12:23.653587   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:12:23.876161   13928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:12:23.912674   13928 start.go:494] detecting cgroup driver to use...
	I0401 11:12:23.925853   13928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:12:23.965699   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:12:24.001877   13928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:12:24.047136   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:12:24.087468   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:12:24.112075   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:12:24.157975   13928 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:12:24.177102   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:12:24.195928   13928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:12:24.240126   13928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:12:24.471761   13928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:12:24.701177   13928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:12:24.701468   13928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
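Editor's note: the 130-byte daemon.json copied here is what forces dockerd onto the cgroupfs driver; its exact contents are not echoed in the log. A guess at the relevant fragment, generated with encoding/json (the struct is illustrative and covers only the exec-opts override):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// daemonConfig models only the field needed for the cgroup-driver
	// override; the real file likely carries additional settings.
	type daemonConfig struct {
		ExecOpts []string `json:"exec-opts"`
	}

	func main() {
		cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b)) // payload to be copied to /etc/docker/daemon.json
	}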
	I0401 11:12:24.749291   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:12:24.966635   13928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:13:49.885829   13928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m24.9186004s)
	I0401 11:13:49.899749   13928 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 11:13:49.985780   13928 out.go:177] 
	W0401 11:13:49.990428   13928 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 01 10:45:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:45:43 functional-706500 dockerd[5287]: time="2024-04-01T10:45:43.246929152Z" level=info msg="Starting up"
	Apr 01 10:46:43 functional-706500 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:46:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 01 10:46:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:46:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:46:43 functional-706500 dockerd[5501]: time="2024-04-01T10:46:43.470357918Z" level=info msg="Starting up"
	Apr 01 10:47:43 functional-706500 dockerd[5501]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:47:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 01 10:47:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:47:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:47:43 functional-706500 dockerd[5669]: time="2024-04-01T10:47:43.721563433Z" level=info msg="Starting up"
	Apr 01 10:48:43 functional-706500 dockerd[5669]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:48:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 01 10:48:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:48:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:48:43 functional-706500 dockerd[5837]: time="2024-04-01T10:48:43.970136068Z" level=info msg="Starting up"
	Apr 01 10:49:43 functional-706500 dockerd[5837]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:49:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:49:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:49:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:49:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 01 10:49:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:49:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:49:44 functional-706500 dockerd[6096]: time="2024-04-01T10:49:44.210275023Z" level=info msg="Starting up"
	Apr 01 10:50:44 functional-706500 dockerd[6096]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:50:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:50:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Apr 01 10:50:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:50:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:50:44 functional-706500 dockerd[6262]: time="2024-04-01T10:50:44.445595351Z" level=info msg="Starting up"
	Apr 01 10:51:44 functional-706500 dockerd[6262]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:51:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:51:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:51:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:51:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Apr 01 10:51:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:51:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:51:44 functional-706500 dockerd[6427]: time="2024-04-01T10:51:44.713240118Z" level=info msg="Starting up"
	Apr 01 10:52:44 functional-706500 dockerd[6427]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:52:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:52:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:52:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:52:44 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Apr 01 10:52:44 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:52:44 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:52:44 functional-706500 dockerd[6633]: time="2024-04-01T10:52:44.963460969Z" level=info msg="Starting up"
	Apr 01 10:53:44 functional-706500 dockerd[6633]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:53:44 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:53:44 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:53:44 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:53:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Apr 01 10:53:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:53:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:53:45 functional-706500 dockerd[6798]: time="2024-04-01T10:53:45.219771451Z" level=info msg="Starting up"
	Apr 01 10:54:45 functional-706500 dockerd[6798]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:54:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:54:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:54:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:54:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Apr 01 10:54:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:54:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:54:45 functional-706500 dockerd[6965]: time="2024-04-01T10:54:45.476248045Z" level=info msg="Starting up"
	Apr 01 10:55:45 functional-706500 dockerd[6965]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:55:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:55:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:55:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:55:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Apr 01 10:55:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:55:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:55:45 functional-706500 dockerd[7131]: time="2024-04-01T10:55:45.712231475Z" level=info msg="Starting up"
	Apr 01 10:56:45 functional-706500 dockerd[7131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:56:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:56:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:56:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:56:45 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Apr 01 10:56:45 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:56:45 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:56:45 functional-706500 dockerd[7303]: time="2024-04-01T10:56:45.972411662Z" level=info msg="Starting up"
	Apr 01 10:57:45 functional-706500 dockerd[7303]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:57:45 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:57:45 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:57:45 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:57:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Apr 01 10:57:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:57:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:57:46 functional-706500 dockerd[7458]: time="2024-04-01T10:57:46.228413043Z" level=info msg="Starting up"
	Apr 01 10:58:46 functional-706500 dockerd[7458]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:58:46 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:58:46 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:58:46 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:58:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 01 10:58:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:58:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:58:46 functional-706500 dockerd[7639]: time="2024-04-01T10:58:46.451545776Z" level=info msg="Starting up"
	Apr 01 10:59:46 functional-706500 dockerd[7639]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:59:46 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:59:46 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:59:46 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:59:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Apr 01 10:59:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:59:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:59:46 functional-706500 dockerd[7825]: time="2024-04-01T10:59:46.724851379Z" level=info msg="Starting up"
	Apr 01 11:00:46 functional-706500 dockerd[7825]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:00:46 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:00:46 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:00:46 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:00:46 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Apr 01 11:00:46 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:00:46 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:00:46 functional-706500 dockerd[8019]: time="2024-04-01T11:00:46.990871590Z" level=info msg="Starting up"
	Apr 01 11:01:47 functional-706500 dockerd[8019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:01:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:01:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:01:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:01:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Apr 01 11:01:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:01:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:01:47 functional-706500 dockerd[8194]: time="2024-04-01T11:01:47.240553866Z" level=info msg="Starting up"
	Apr 01 11:02:47 functional-706500 dockerd[8194]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:02:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:02:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:02:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:02:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Apr 01 11:02:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:02:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:02:47 functional-706500 dockerd[8429]: time="2024-04-01T11:02:47.458291117Z" level=info msg="Starting up"
	Apr 01 11:03:47 functional-706500 dockerd[8429]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:03:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:03:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:03:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:03:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Apr 01 11:03:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:03:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:03:47 functional-706500 dockerd[8600]: time="2024-04-01T11:03:47.690450635Z" level=info msg="Starting up"
	Apr 01 11:04:47 functional-706500 dockerd[8600]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:04:47 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:04:47 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 01 11:04:47 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:04:47 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:04:47 functional-706500 dockerd[8762]: time="2024-04-01T11:04:47.974793898Z" level=info msg="Starting up"
	Apr 01 11:05:48 functional-706500 dockerd[8762]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:05:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:05:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 01 11:05:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:05:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:05:48 functional-706500 dockerd[9019]: time="2024-04-01T11:05:48.218482190Z" level=info msg="Starting up"
	Apr 01 11:06:48 functional-706500 dockerd[9019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:06:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:06:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Apr 01 11:06:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:06:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:06:48 functional-706500 dockerd[9189]: time="2024-04-01T11:06:48.467778205Z" level=info msg="Starting up"
	Apr 01 11:07:48 functional-706500 dockerd[9189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:07:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:07:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 01 11:07:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:07:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:07:48 functional-706500 dockerd[9356]: time="2024-04-01T11:07:48.729681187Z" level=info msg="Starting up"
	Apr 01 11:08:48 functional-706500 dockerd[9356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:08:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:08:48 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 01 11:08:48 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:08:48 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:08:48 functional-706500 dockerd[9619]: time="2024-04-01T11:08:48.961882907Z" level=info msg="Starting up"
	Apr 01 11:09:48 functional-706500 dockerd[9619]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:09:48 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:09:48 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:09:48 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:09:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Apr 01 11:09:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:09:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:09:49 functional-706500 dockerd[9784]: time="2024-04-01T11:09:49.212023599Z" level=info msg="Starting up"
	Apr 01 11:10:49 functional-706500 dockerd[9784]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:10:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Apr 01 11:10:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:10:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:10:49 functional-706500 dockerd[9947]: time="2024-04-01T11:10:49.471879307Z" level=info msg="Starting up"
	Apr 01 11:11:49 functional-706500 dockerd[9947]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:11:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
	Apr 01 11:11:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:11:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:11:49 functional-706500 dockerd[10267]: time="2024-04-01T11:11:49.718265627Z" level=info msg="Starting up"
	Apr 01 11:12:24 functional-706500 dockerd[10267]: time="2024-04-01T11:12:24.990418752Z" level=info msg="Processing signal 'terminated'"
	Apr 01 11:12:49 functional-706500 dockerd[10267]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:12:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:12:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:12:49 functional-706500 dockerd[10662]: time="2024-04-01T11:12:49.836764290Z" level=info msg="Starting up"
	Apr 01 11:13:49 functional-706500 dockerd[10662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:13:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 11:13:49.991137   13928 out.go:239] * 
	W0401 11:13:49.992691   13928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 11:13:49.996825   13928 out.go:177] 
	
	
	==> Docker <==
	Apr 01 11:12:49 functional-706500 dockerd[10267]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:12:49 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:12:49Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:12:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:12:49 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:12:49Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:12:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:12:49 functional-706500 dockerd[10662]: time="2024-04-01T11:12:49.836764290Z" level=info msg="Starting up"
	Apr 01 11:13:49 functional-706500 dockerd[10662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:13:49 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:13:49Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:13:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:13:50 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 01 11:13:50 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:13:50 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:13:50 functional-706500 dockerd[10812]: time="2024-04-01T11:13:50.111583723Z" level=info msg="Starting up"
	Apr 01 11:14:50 functional-706500 dockerd[10812]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:14:50 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:14:50 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:14:50 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:14:50 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:14:50Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:14:50 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 01 11:14:50 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:14:50 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-01T11:14:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.243279] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.862068] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.224563] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.212159] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.303367] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.779527] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.125264] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.334425] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.690157] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +7.791368] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.115789] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.847693] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.168038] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.042400] systemd-fstab-generator[3395]: Ignoring "noauto" option for root device
	[  +0.238049] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 10:43] kauditd_printk_skb: 65 callbacks suppressed
	[Apr 1 10:44] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.654601] systemd-fstab-generator[4704]: Ignoring "noauto" option for root device
	[  +0.265712] systemd-fstab-generator[4719]: Ignoring "noauto" option for root device
	[  +0.315556] systemd-fstab-generator[4734]: Ignoring "noauto" option for root device
	[  +5.367808] kauditd_printk_skb: 89 callbacks suppressed
	[Apr 1 11:12] systemd-fstab-generator[10536]: Ignoring "noauto" option for root device
	[  +0.604437] systemd-fstab-generator[10572]: Ignoring "noauto" option for root device
	[  +0.234871] systemd-fstab-generator[10584]: Ignoring "noauto" option for root device
	[  +0.260517] systemd-fstab-generator[10598]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 11:15:50 up 35 min,  0 users,  load average: 0.05, 0.06, 0.06
	Linux functional-706500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 11:15:44 functional-706500 kubelet[2870]: E0401 11:15:44.545743    2870 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 31m13.395507458s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 01 11:15:46 functional-706500 kubelet[2870]: E0401 11:15:46.992102    2870 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:15:46 functional-706500 kubelet[2870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:15:46 functional-706500 kubelet[2870]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:15:46 functional-706500 kubelet[2870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:15:46 functional-706500 kubelet[2870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:15:48 functional-706500 kubelet[2870]: E0401 11:15:48.635198    2870 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.19.145.71:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-706500.17c22215d7937688  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-706500,UID:9df4090af0216fb714c930802ab28762,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://172.19.145.71:8441/livez\": dial tcp 172.19.145.71:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-706500,},FirstTimestamp:2024-04-01 10:44:45.560632968 +0000 UTC m=+118.974585381,LastTimestamp:2024-04-01 10:44:45.560632968 +0000 UTC m=+118.974585381,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-706500,}"
	Apr 01 11:15:49 functional-706500 kubelet[2870]: E0401 11:15:49.547228    2870 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 31m18.396994977s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.129398    2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-706500?timeout=10s\": dial tcp 172.19.145.71:8441: connect: connection refused" interval="7s"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.509438    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.510969    2870 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.509929    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.511025    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: I0401 11:15:50.511039    2870 image_gc_manager.go:215] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.509951    2870 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.511101    2870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.510032    2870 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.511132    2870 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.511149    2870 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.510856    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.511173    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.510938    2870 kubelet.go:2902] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.515423    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.515530    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 01 11:15:50 functional-706500 kubelet[2870]: E0401 11:15:50.515961    2870 kubelet.go:1433] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:14:02.852845    1132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 11:14:50.151063    1132 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:14:50.182226    1132 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:14:50.212519    1132 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:14:50.245136    1132 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:14:50.277082    1132 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:14:50.309715    1132 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:14:50.339289    1132 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:14:50.368007    1132 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500: exit status 2 (12.4687627s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:15:51.287620    8180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-706500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (301.14s)
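Note on the failure above: every restart cycle in the captured journal dies on the same `failed to dial "/run/containerd/containerd.sock"` error until systemd schedules the next attempt, so dockerd never comes up because containerd is unreachable. A minimal way to confirm from the host whether containerd itself is running inside the guest is sketched below; the profile name is taken from this run, and the commands are standard minikube/systemd tooling rather than anything specific to this test harness.

	# Is containerd running inside the minikube VM?
	minikube ssh -p functional-706500 -- sudo systemctl status containerd
	# If not, its own journal usually shows why it failed to start
	minikube ssh -p functional-706500 -- sudo journalctl -u containerd --no-pager -n 50
	# Does the socket dockerd is dialing actually exist?
	minikube ssh -p functional-706500 -- sudo ls -l /run/containerd/containerd.sock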

                                                
                                    
TestFunctional/serial/ComponentHealth (180.81s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-706500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-706500 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (10.4306821s)

                                                
                                                
** stderr ** 
	E0401 11:16:05.812016    4828 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:16:07.927107    4828 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:16:09.973382    4828 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:16:12.021986    4828 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	E0401 11:16:14.066431    4828 memcache.go:265] couldn't get current server API group list: Get "https://172.19.145.71:8441/api?timeout=32s": dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.19.145.71:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-706500 get po -l tier=control-plane -n kube-system -o=json": exit status 1
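The connection-refused errors above are a downstream symptom: with the Docker daemon stuck in its restart loop, the kube-apiserver container behind 172.19.145.71:8441 was never brought back up. Before reading the full post-mortem dump that follows, the control-plane state can be confirmed directly; this is a sketch using the profile name and endpoint reported in this run:

	# Summarize host / kubelet / apiserver state for the profile
	minikube status -p functional-706500
	# Probe the apiserver liveness endpoint the kubelet was also failing against
	curl -k https://172.19.145.71:8441/livez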
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-706500 -n functional-706500: exit status 2 (12.4031559s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:16:14.188343    6588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 logs -n 25
E0401 11:18:23.443953    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 logs -n 25: (2m25.0341285s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| unpause | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | unpause                                                                  |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | unpause                                                                  |                   |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | unpause                                                                  |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | stop                                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | stop                                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                  | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500              |                   |                   |                |                     |                     |
	|         | stop                                                                     |                   |                   |                |                     |                     |
	| delete  | -p nospam-189500                                                         | nospam-189500     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	| start   | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
	|         | --memory=4000                                                            |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:51 UTC | 01 Apr 24 10:53 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:53 UTC | 01 Apr 24 10:55 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:55 UTC | 01 Apr 24 10:57 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                              | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:57 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                              |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache delete                                           | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                              |                   |                   |                |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |                |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	| ssh     | functional-706500 ssh sudo                                               | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | crictl images                                                            |                   |                   |                |                     |                     |
	| ssh     | functional-706500                                                        | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| ssh     | functional-706500 ssh                                                    | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| cache   | functional-706500 cache reload                                           | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC | 01 Apr 24 11:01 UTC |
	| ssh     | functional-706500 ssh                                                    | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |                |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| kubectl | functional-706500 kubectl --                                             | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:05 UTC |                     |
	|         | --context functional-706500                                              |                   |                   |                |                     |                     |
	|         | get pods                                                                 |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:11 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |                |                     |                     |
	|         | --wait=all                                                               |                   |                   |                |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
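	The final kubectl row above starts at 11:05 UTC and never records an end time, which lines up with the MinikubeKubectlCmd failures in the summary. Assuming the binary path used elsewhere in this run, that row corresponds to an invocation of roughly the following shape; this is an illustration, not a verbatim replay of the logged command:
	
		out/minikube-windows-amd64.exe -p functional-706500 kubectl -- --context functional-706500 get pods
	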
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 11:11:02
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 11:11:02.675408   13928 out.go:291] Setting OutFile to fd 864 ...
	I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:11:02.677219   13928 out.go:304] Setting ErrFile to fd 1008...
	I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:11:02.702122   13928 out.go:298] Setting JSON to false
	I0401 11:11:02.706199   13928 start.go:129] hostinfo: {"hostname":"minikube6","uptime":312621,"bootTime":1711657241,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 11:11:02.706199   13928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 11:11:02.714712   13928 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 11:11:02.718618   13928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:11:02.718618   13928 notify.go:220] Checking for updates...
	I0401 11:11:02.720771   13928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:11:02.723643   13928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 11:11:02.727188   13928 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:11:02.728921   13928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:11:02.731796   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:11:02.732867   13928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:11:08.299905   13928 out.go:177] * Using the hyperv driver based on existing profile
	I0401 11:11:08.301941   13928 start.go:297] selected driver: hyperv
	I0401 11:11:08.301941   13928 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:11:08.301941   13928 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:11:08.352980   13928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:11:08.352980   13928 cni.go:84] Creating CNI manager for ""
	I0401 11:11:08.352980   13928 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 11:11:08.352980   13928 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:11:08.353717   13928 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 11:11:08.358695   13928 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 11:11:08.360937   13928 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:11:08.360937   13928 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 11:11:08.360937   13928 cache.go:56] Caching tarball of preloaded images
	I0401 11:11:08.361555   13928 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:11:08.361555   13928 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:11:08.361555   13928 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
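	The config.json path above is where the cluster config struct logged a few lines earlier is persisted between runs. Assuming the standard profile layout from this run, it can be inspected directly on the Windows host, for example:
	
		type C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json
	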
	I0401 11:11:08.364025   13928 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:11:08.364637   13928 start.go:364] duration metric: took 612.4µs to acquireMachinesLock for "functional-706500"
	I0401 11:11:08.364637   13928 start.go:96] Skipping create...Using existing machine configuration
	I0401 11:11:08.364637   13928 fix.go:54] fixHost starting: 
	I0401 11:11:08.365259   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:11.219191   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:11.219191   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:11.219191   13928 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 11:11:11.219191   13928 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 11:11:11.224961   13928 out.go:177] * Updating the running hyperv "functional-706500" VM ...
	I0401 11:11:11.227211   13928 machine.go:94] provisionDockerMachine start ...
	I0401 11:11:11.227211   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:13.436978   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:13.437051   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:13.437051   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:16.016234   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:16.016234   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:16.022101   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:16.022735   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:16.022735   13928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:11:16.166363   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 11:11:16.166363   13928 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 11:11:16.166363   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:18.351617   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:18.351861   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:18.351939   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:20.983006   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:20.983006   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:20.988905   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:20.989438   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:20.989438   13928 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 11:11:21.150541   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 11:11:21.150541   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:23.340502   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:23.340882   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:23.340882   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:26.010381   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:26.010381   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:26.017280   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:26.017280   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:26.017280   13928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:11:26.148356   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:11:26.148356   13928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:11:26.148425   13928 buildroot.go:174] setting up certificates
	I0401 11:11:26.148425   13928 provision.go:84] configureAuth start
	I0401 11:11:26.148425   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:28.313151   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:28.313151   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:28.313690   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:30.949501   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:30.949501   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:30.949682   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:33.152310   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:33.152310   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:33.153208   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:35.840975   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:35.841190   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:35.841190   13928 provision.go:143] copyHostCerts
	I0401 11:11:35.841696   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:11:35.841696   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:11:35.842172   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:11:35.843643   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:11:35.843643   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:11:35.843959   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:11:35.845117   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:11:35.845117   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:11:35.845219   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:11:35.846465   13928 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
	I0401 11:11:36.004672   13928 provision.go:177] copyRemoteCerts
	I0401 11:11:36.018395   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:11:36.018615   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:40.901918   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:40.901918   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:40.902935   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:11:41.015623   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9971931s)
	I0401 11:11:41.015623   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 11:11:41.066242   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:11:41.112581   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 11:11:41.163579   13928 provision.go:87] duration metric: took 15.0150492s to configureAuth
	I0401 11:11:41.163579   13928 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:11:41.164283   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:11:41.164366   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:45.977441   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:45.977441   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:45.982713   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:45.983473   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:45.983473   13928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:11:46.126333   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:11:46.126333   13928 buildroot.go:70] root file system type: tmpfs
	I0401 11:11:46.126632   13928 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:11:46.126702   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:50.968210   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:50.968210   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:50.975527   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:50.976107   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:50.976324   13928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:11:51.143276   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
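	The unit text above is first written to /lib/systemd/system/docker.service.new; it only replaces the live unit (followed by a daemon-reload and a docker restart) if it differs from the current one, as the diff-and-mv command a few lines below shows. Assuming the profile is still reachable over SSH, the rendered unit and recent daemon logs can be checked from the host with, for example:
	
		out/minikube-windows-amd64.exe -p functional-706500 ssh -- sudo systemctl cat docker.service
		out/minikube-windows-amd64.exe -p functional-706500 ssh -- sudo journalctl -u docker --no-pager -n 50
	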
	
	I0401 11:11:51.143355   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:53.313983   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:53.313983   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:53.315008   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:55.944506   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:55.944506   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:55.952199   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:55.952984   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:55.952984   13928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:11:56.098851   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:11:56.098851   13928 machine.go:97] duration metric: took 44.8713258s to provisionDockerMachine
	I0401 11:11:56.098912   13928 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 11:11:56.098912   13928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:11:56.112299   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:11:56.112299   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:58.314497   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:58.314497   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:58.314550   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:01.020981   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:01.021995   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:01.022193   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:01.123158   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.010824s)
	I0401 11:12:01.135833   13928 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:12:01.144329   13928 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:12:01.144329   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:12:01.145605   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:12:01.147606   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:12:01.148343   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 11:12:01.161761   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 11:12:01.191883   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:12:01.242289   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 11:12:01.293425   13928 start.go:296] duration metric: took 5.1944768s for postStartSetup
	I0401 11:12:01.293551   13928 fix.go:56] duration metric: took 52.928543s for fixHost
	I0401 11:12:01.293609   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:06.113661   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:06.113926   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:06.119841   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:12:06.120607   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:12:06.120607   13928 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 11:12:06.256997   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711969926.251527594
	
	I0401 11:12:06.256997   13928 fix.go:216] guest clock: 1711969926.251527594
	I0401 11:12:06.256997   13928 fix.go:229] Guest: 2024-04-01 11:12:06.251527594 +0000 UTC Remote: 2024-04-01 11:12:01.2935512 +0000 UTC m=+58.789370401 (delta=4.957976394s)
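	The delta reported here is simply the guest clock minus the host-side remote timestamp: 1711969926.2515 - 1711969921.2936 ≈ 4.958 s, which is why the next SSH command pins the guest clock with "sudo date -s @1711969926".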
	I0401 11:12:06.257089   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:08.474846   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:08.474908   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:08.474908   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:11.120345   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:11.120345   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:11.130054   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:12:11.130207   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:12:11.130207   13928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711969926
	I0401 11:12:11.274198   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:12:06 UTC 2024
	
	I0401 11:12:11.274198   13928 fix.go:236] clock set: Mon Apr  1 11:12:06 UTC 2024
	 (err=<nil>)
	I0401 11:12:11.274198   13928 start.go:83] releasing machines lock for "functional-706500", held for 1m2.90912s
	I0401 11:12:11.274396   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:16.095069   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:16.095069   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:16.098899   13928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:12:16.099055   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:16.110253   13928 ssh_runner.go:195] Run: cat /version.json
	I0401 11:12:16.110253   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:18.356456   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:18.357479   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:18.357605   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:21.024542   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:21.024944   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:21.025655   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:21.064561   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:21.065133   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:21.065201   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:21.132409   13928 ssh_runner.go:235] Completed: cat /version.json: (5.0219453s)
	I0401 11:12:21.146776   13928 ssh_runner.go:195] Run: systemctl --version
	I0401 11:12:23.162851   13928 ssh_runner.go:235] Completed: systemctl --version: (2.0160608s)
	I0401 11:12:23.162913   13928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0639645s)
	W0401 11:12:23.162913   13928 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0401 11:12:23.162913   13928 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0401 11:12:23.162913   13928 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0401 11:12:23.175996   13928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 11:12:23.184884   13928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:12:23.197334   13928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:12:23.215553   13928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 11:12:23.215605   13928 start.go:494] detecting cgroup driver to use...
	I0401 11:12:23.215905   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:12:23.265017   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:12:23.295678   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:12:23.315671   13928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:12:23.327504   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:12:23.360952   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:12:23.394039   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:12:23.426344   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:12:23.456513   13928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:12:23.488100   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:12:23.519746   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:12:23.553091   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:12:23.592313   13928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:12:23.623585   13928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:12:23.653587   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:12:23.876161   13928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:12:23.912674   13928 start.go:494] detecting cgroup driver to use...
	I0401 11:12:23.925853   13928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:12:23.965699   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:12:24.001877   13928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:12:24.047136   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:12:24.087468   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:12:24.112075   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:12:24.157975   13928 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:12:24.177102   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:12:24.195928   13928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:12:24.240126   13928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:12:24.471761   13928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:12:24.701177   13928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:12:24.701468   13928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:12:24.749291   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:12:24.966635   13928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:13:49.885829   13928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m24.9186004s)
	I0401 11:13:49.899749   13928 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 11:13:49.985780   13928 out.go:177] 
	W0401 11:13:49.990428   13928 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 01 10:45:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:45:43 - 11:11:49 functional-706500: [26 near-identical restart cycles condensed] each minute dockerd logs "Starting up" and exits with `failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded`; systemd then logs "Main process exited, code=exited, status=1/FAILURE", "docker.service: Failed with result 'exit-code'", "Failed to start Docker Application Container Engine.", schedules the next restart (counters 2 through 27), and stops and starts the unit again.
	Apr 01 11:11:49 functional-706500 dockerd[10267]: time="2024-04-01T11:11:49.718265627Z" level=info msg="Starting up"
	Apr 01 11:12:24 functional-706500 dockerd[10267]: time="2024-04-01T11:12:24.990418752Z" level=info msg="Processing signal 'terminated'"
	Apr 01 11:12:49 functional-706500 dockerd[10267]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:12:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:12:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:12:49 functional-706500 dockerd[10662]: time="2024-04-01T11:12:49.836764290Z" level=info msg="Starting up"
	Apr 01 11:13:49 functional-706500 dockerd[10662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:13:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 11:13:49.991137   13928 out.go:239] * 
	W0401 11:13:49.992691   13928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 11:13:49.996825   13928 out.go:177] 
	
	
	==> Docker <==
	Apr 01 11:15:50 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:15:50 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:15:50 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:15:50Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:15:50 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 01 11:15:50 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:15:50 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:15:50 functional-706500 dockerd[11221]: time="2024-04-01T11:15:50.735998270Z" level=info msg="Starting up"
	Apr 01 11:16:50 functional-706500 dockerd[11221]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:16:50 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:16:50Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:16:50 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:16:50 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:16:50 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:16:50 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 01 11:16:50 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:16:50 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:16:50 functional-706500 dockerd[11487]: time="2024-04-01T11:16:50.971408107Z" level=info msg="Starting up"
	Apr 01 11:17:50 functional-706500 dockerd[11487]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:17:50 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:17:50Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Apr 01 11:17:50 functional-706500 cri-dockerd[1235]: time="2024-04-01T11:17:50Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:17:50 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:17:50 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:17:50 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:17:51 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 01 11:17:51 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:17:51 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-01T11:17:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.243279] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.862068] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.224563] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.212159] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.303367] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.779527] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.125264] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.334425] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.690157] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +7.791368] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.115789] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.847693] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.168038] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.042400] systemd-fstab-generator[3395]: Ignoring "noauto" option for root device
	[  +0.238049] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 10:43] kauditd_printk_skb: 65 callbacks suppressed
	[Apr 1 10:44] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.654601] systemd-fstab-generator[4704]: Ignoring "noauto" option for root device
	[  +0.265712] systemd-fstab-generator[4719]: Ignoring "noauto" option for root device
	[  +0.315556] systemd-fstab-generator[4734]: Ignoring "noauto" option for root device
	[  +5.367808] kauditd_printk_skb: 89 callbacks suppressed
	[Apr 1 11:12] systemd-fstab-generator[10536]: Ignoring "noauto" option for root device
	[  +0.604437] systemd-fstab-generator[10572]: Ignoring "noauto" option for root device
	[  +0.234871] systemd-fstab-generator[10584]: Ignoring "noauto" option for root device
	[  +0.260517] systemd-fstab-generator[10598]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 11:18:51 up 38 min,  0 users,  load average: 0.03, 0.04, 0.04
	Linux functional-706500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 11:18:46 functional-706500 kubelet[2870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:18:46 functional-706500 kubelet[2870]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:18:46 functional-706500 kubelet[2870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:18:46 functional-706500 kubelet[2870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:18:47 functional-706500 kubelet[2870]: E0401 11:18:47.671537    2870 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-706500.17c22214ff7a8a99\": dial tcp 172.19.145.71:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-706500.17c22214ff7a8a99  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-706500,UID:9df4090af0216fb714c930802ab28762,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.19.145.71:8441/readyz\": dial tcp 172.19.145.71:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-706500,},FirstTimestamp:2024-04-01 10:44:41.935121049 +0000 UTC m=+115.349073462,LastTimestamp:2024-04-01 10:44:46.935416457 +0000 UTC m=+120.349368770,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-706500,}"
	Apr 01 11:18:49 functional-706500 kubelet[2870]: E0401 11:18:49.581220    2870 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 34m18.43098501s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243111    2870 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243157    2870 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243174    2870 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243215    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243253    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243396    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243471    2870 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243521    2870 kubelet.go:2902] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243604    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.243624    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: I0401 11:18:51.243637    2870 image_gc_manager.go:207] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.244341    2870 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.244414    2870 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: I0401 11:18:51.244428    2870 image_gc_manager.go:215] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.244457    2870 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.244493    2870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.244608    2870 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.244625    2870 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 01 11:18:51 functional-706500 kubelet[2870]: E0401 11:18:51.244826    2870 kubelet.go:1433] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:16:26.594106    7292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 11:16:50.777874    7292 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:16:50.810745    7292 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:16:50.842626    7292 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:16:50.874426    7292 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:17:51.002199    7292 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:17:51.035234    7292 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:17:51.069648    7292 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0401 11:17:51.099645    7292 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
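
The dominant failure in the output above is dockerd repeatedly timing out while dialing /run/containerd/containerd.sock, i.e. containerd itself never became reachable inside the guest. A minimal manual check, assuming the functional-706500 VM is still reachable over SSH (standard minikube and systemd commands, not part of the recorded test run):

	# Open a shell in the guest for this profile.
	out/minikube-windows-amd64.exe ssh -p functional-706500
	# Inside the guest: is containerd running, and does its socket exist?
	sudo systemctl status containerd --no-pager
	ls -l /run/containerd/containerd.sock
	# Recent containerd logs usually show why the socket never came up.
	sudo journalctl -u containerd --no-pager -n 50
	# dockerd's own unit log, for the dial timeouts seen above.
	sudo journalctl -u docker --no-pager -n 50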
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-706500 -n functional-706500: exit status 2 (12.5312379s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:18:52.027120   13996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
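
The recurring "Unable to resolve the current Docker CLI context \"default\"" warning in these stderr blocks is emitted on the Windows host, not in the guest: the Docker CLI metadata under C:\Users\jenkins.minikube6\.docker is missing the context file minikube tries to read. It is cosmetic relative to the in-guest daemon failure; as a sketch (plain docker CLI, assuming a docker client is installed on the host), it could be inspected and reset with:

	docker context ls
	docker context use default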
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-706500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (180.81s)
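For reference, the condition this test gates on can be spot-checked by hand once a cluster is actually up; these are ordinary kubectl commands against this profile's context, an equivalent manual check rather than the test's own assertions:

	kubectl --context functional-706500 get pods -n kube-system
	kubectl --context functional-706500 get --raw "/readyz?verbose"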

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (28.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-706500 logs: exit status 1 (27.4312551s)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | -p download-only-373700                                                                     | download-only-373700 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-729600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | binary-mirror-729600                                                                        |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |                |                     |                     |
	|         | http://127.0.0.1:49987                                                                      |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | -p binary-mirror-729600                                                                     | binary-mirror-729600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| start   | -p addons-852800 --wait=true                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |                |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |                |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | disable metrics-server                                                                      |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| ssh     | addons-852800 ssh cat                                                                       | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | /opt/local-path-provisioner/pvc-772810e3-66c1-4b28-81a8-0348debb99f1_default_test-pvc/file1 |                      |                   |                |                     |                     |
	| ip      | addons-852800 ip                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |                |                     |                     |
	|         | -v=1                                                                                        |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:29 UTC |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:29 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:29 UTC |
	|         | -p addons-852800                                                                            |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:29 UTC |
	|         | disable volumesnapshots                                                                     |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:29 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |                |                     |                     |
	|         | -v=1                                                                                        |                      |                   |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:30 UTC |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:30 UTC |
	|         | -p addons-852800                                                                            |                      |                   |                |                     |                     |
	| ssh     | addons-852800 ssh curl -s                                                                   | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:30 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |                   |                |                     |                     |
	| ip      | addons-852800 ip                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |                   |                |                     |                     |
	|         | -v=1                                                                                        |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |                   |                |                     |                     |
	| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |                   |                |                     |                     |
	|         | -v=1                                                                                        |                      |                   |                |                     |                     |
	| stop    | -p addons-852800                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| addons  | disable gvisor -p                                                                           | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
	|         | addons-852800                                                                               |                      |                   |                |                     |                     |
	| delete  | -p addons-852800                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:33 UTC |
	| start   | -p nospam-189500 -n=1 --memory=2250 --wait=false                                            | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:33 UTC | 01 Apr 24 10:36 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                       |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |                |                     |                     |
	| start   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | pause                                                                                       |                      |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | pause                                                                                       |                      |                   |                |                     |                     |
	| pause   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | pause                                                                                       |                      |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | unpause                                                                                     |                      |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | unpause                                                                                     |                      |                   |                |                     |                     |
	| unpause | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | unpause                                                                                     |                      |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | stop                                                                                        |                      |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | stop                                                                                        |                      |                   |                |                     |                     |
	| stop    | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
	|         | stop                                                                                        |                      |                   |                |                     |                     |
	| delete  | -p nospam-189500                                                                            | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
	| start   | -p functional-706500                                                                        | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
	|         | --memory=4000                                                                               |                      |                   |                |                     |                     |
	|         | --apiserver-port=8441                                                                       |                      |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                                                  |                      |                   |                |                     |                     |
	| start   | -p functional-706500                                                                        | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
	|         | --alsologtostderr -v=8                                                                      |                      |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:51 UTC | 01 Apr 24 10:53 UTC |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:53 UTC | 01 Apr 24 10:55 UTC |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:55 UTC | 01 Apr 24 10:57 UTC |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
	| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:57 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                                                 |                      |                   |                |                     |                     |
	| cache   | functional-706500 cache delete                                                              | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | minikube-local-cache-test:functional-706500                                                 |                      |                   |                |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |                |                     |                     |
	| cache   | list                                                                                        | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
	| ssh     | functional-706500 ssh sudo                                                                  | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | crictl images                                                                               |                      |                   |                |                     |                     |
	| ssh     | functional-706500                                                                           | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
	|         | ssh sudo docker rmi                                                                         |                      |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
	| ssh     | functional-706500 ssh                                                                       | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
	| cache   | functional-706500 cache reload                                                              | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC | 01 Apr 24 11:01 UTC |
	| ssh     | functional-706500 ssh                                                                       | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |                |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
	| kubectl | functional-706500 kubectl --                                                                | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:05 UTC |                     |
	|         | --context functional-706500                                                                 |                      |                   |                |                     |                     |
	|         | get pods                                                                                    |                      |                   |                |                     |                     |
	| start   | -p functional-706500                                                                        | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:11 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |                |                     |                     |
	|         | --wait=all                                                                                  |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 11:11:02
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
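	
	The [IWEF] prefix encodes severity: Info, Warning, Error, Fatal. Saved to a file, the stream below can therefore be filtered by level; a minimal sketch, assuming a hypothetical local copy named last-start.log:
	
	    # Keep only warning/error/fatal lines; the anchor tolerates leading tabs.
	    grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log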
	I0401 11:11:02.675408   13928 out.go:291] Setting OutFile to fd 864 ...
	I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:11:02.677219   13928 out.go:304] Setting ErrFile to fd 1008...
	I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:11:02.702122   13928 out.go:298] Setting JSON to false
	I0401 11:11:02.706199   13928 start.go:129] hostinfo: {"hostname":"minikube6","uptime":312621,"bootTime":1711657241,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 11:11:02.706199   13928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 11:11:02.714712   13928 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 11:11:02.718618   13928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:11:02.718618   13928 notify.go:220] Checking for updates...
	I0401 11:11:02.720771   13928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:11:02.723643   13928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 11:11:02.727188   13928 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:11:02.728921   13928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:11:02.731796   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:11:02.732867   13928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:11:08.299905   13928 out.go:177] * Using the hyperv driver based on existing profile
	I0401 11:11:08.301941   13928 start.go:297] selected driver: hyperv
	I0401 11:11:08.301941   13928 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:11:08.301941   13928 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:11:08.352980   13928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:11:08.352980   13928 cni.go:84] Creating CNI manager for ""
	I0401 11:11:08.352980   13928 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 11:11:08.352980   13928 start.go:340] cluster config:
	{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:11:08.353717   13928 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 11:11:08.358695   13928 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
	I0401 11:11:08.360937   13928 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:11:08.360937   13928 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 11:11:08.360937   13928 cache.go:56] Caching tarball of preloaded images
	I0401 11:11:08.361555   13928 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:11:08.361555   13928 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:11:08.361555   13928 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
	I0401 11:11:08.364025   13928 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:11:08.364637   13928 start.go:364] duration metric: took 612.4µs to acquireMachinesLock for "functional-706500"
	I0401 11:11:08.364637   13928 start.go:96] Skipping create...Using existing machine configuration
	I0401 11:11:08.364637   13928 fix.go:54] fixHost starting: 
	I0401 11:11:08.365259   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:11.219191   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:11.219191   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:11.219191   13928 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
	W0401 11:11:11.219191   13928 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 11:11:11.224961   13928 out.go:177] * Updating the running hyperv "functional-706500" VM ...
	I0401 11:11:11.227211   13928 machine.go:94] provisionDockerMachine start ...
	I0401 11:11:11.227211   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:13.436978   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:13.437051   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:13.437051   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:16.016234   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:16.016234   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:16.022101   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:16.022735   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:16.022735   13928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:11:16.166363   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 11:11:16.166363   13928 buildroot.go:166] provisioning hostname "functional-706500"
	I0401 11:11:16.166363   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:18.351617   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:18.351861   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:18.351939   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:20.983006   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:20.983006   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:20.988905   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:20.989438   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:20.989438   13928 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
	I0401 11:11:21.150541   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500
	
	I0401 11:11:21.150541   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:23.340502   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:23.340882   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:23.340882   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:26.010381   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:26.010381   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:26.017280   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:26.017280   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:26.017280   13928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:11:26.148356   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
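	
	The /etc/hosts command above is idempotent: it rewrites an existing 127.0.1.1 entry if one is present, appends one otherwise, and does nothing when the hostname is already mapped. The same logic again, with comments added (behavior unchanged):
	
	    if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then        # hostname not yet mapped?
	        if grep -xq '127.0.1.1\s.*' /etc/hosts; then              # reuse an existing 127.0.1.1 line
	            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts
	        else
	            echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts
	        fi
	    fi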
	I0401 11:11:26.148356   13928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:11:26.148425   13928 buildroot.go:174] setting up certificates
	I0401 11:11:26.148425   13928 provision.go:84] configureAuth start
	I0401 11:11:26.148425   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:28.313151   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:28.313151   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:28.313690   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:30.949501   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:30.949501   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:30.949682   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:33.152310   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:33.152310   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:33.153208   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:35.840975   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:35.841190   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:35.841190   13928 provision.go:143] copyHostCerts
	I0401 11:11:35.841696   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:11:35.841696   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:11:35.842172   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:11:35.843643   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:11:35.843643   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:11:35.843959   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:11:35.845117   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:11:35.845117   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:11:35.845219   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:11:35.846465   13928 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
	I0401 11:11:36.004672   13928 provision.go:177] copyRemoteCerts
	I0401 11:11:36.018395   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:11:36.018615   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:38.234069   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:40.901918   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:40.901918   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:40.902935   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:11:41.015623   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9971931s)
	I0401 11:11:41.015623   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 11:11:41.066242   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:11:41.112581   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 11:11:41.163579   13928 provision.go:87] duration metric: took 15.0150492s to configureAuth
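	
	configureAuth has just placed ca.pem, server.pem and server-key.pem under /etc/docker in the guest, which is what lets dockerd require TLS on tcp://0.0.0.0:2376 (see the ExecStart further down). A sketch of verifying that endpoint from the host with the client certs copied above, assuming the daemon is up:
	
	    docker --tlsverify --tlscacert "C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem" --tlscert "C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem" --tlskey "C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem" -H tcp://172.19.145.71:2376 version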
	I0401 11:11:41.163579   13928 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:11:41.164283   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:11:41.164366   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:43.344617   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:45.977441   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:45.977441   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:45.982713   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:45.983473   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:45.983473   13928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:11:46.126333   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:11:46.126333   13928 buildroot.go:70] root file system type: tmpfs
	I0401 11:11:46.126632   13928 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:11:46.126702   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:48.312395   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:50.968210   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:50.968210   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:50.975527   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:50.976107   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:50.976324   13928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:11:51.143276   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
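Note the empty ExecStart= immediately before the full dockerd command in the unit above: assigning an empty value is the standard systemd idiom for clearing an inherited ExecStart so the new one replaces it rather than being appended (which, as the unit's own comments say, systemd would reject for a non-oneshot service). Once the new unit is installed, the effective command can be confirmed with stock systemctl calls (nothing minikube-specific):

	# Show the unit file together with any drop-ins systemd merged in
	sudo systemctl cat docker.service
	# Show only the ExecStart that survived the empty assignment
	sudo systemctl show -p ExecStart docker.service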
	I0401 11:11:51.143355   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:53.313983   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:53.313983   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:53.315008   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:11:55.944506   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:11:55.944506   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:55.952199   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:11:55.952984   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:11:55.952984   13928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:11:56.098851   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:11:56.098851   13928 machine.go:97] duration metric: took 44.8713258s to provisionDockerMachine
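The diff-or-install one-liner just above is what keeps re-provisioning idempotent: diff -u exits 0 when the freshly rendered unit matches the installed one, so the || { ... } branch (move the new file into place, daemon-reload, enable, restart) only fires when the unit actually changed; the empty command output here is consistent with the no-change path. A minimal sketch of the same pattern, with hypothetical paths:

	new=/tmp/docker.service.new            # freshly rendered candidate (hypothetical path)
	cur=/lib/systemd/system/docker.service
	# diff exits non-zero when the files differ; only then install and restart
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}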
	I0401 11:11:56.098912   13928 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
	I0401 11:11:56.098912   13928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:11:56.112299   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:11:56.112299   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:11:58.314497   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:11:58.314497   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:11:58.314550   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:01.020981   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:01.021995   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:01.022193   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:01.123158   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.010824s)
	I0401 11:12:01.135833   13928 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:12:01.144329   13928 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:12:01.144329   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:12:01.145605   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:12:01.147606   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:12:01.148343   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
	I0401 11:12:01.161761   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
	I0401 11:12:01.191883   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:12:01.242289   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
	I0401 11:12:01.293425   13928 start.go:296] duration metric: took 5.1944768s for postStartSetup
	I0401 11:12:01.293551   13928 fix.go:56] duration metric: took 52.928543s for fixHost
	I0401 11:12:01.293609   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:03.483354   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:06.113661   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:06.113926   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:06.119841   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:12:06.120607   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:12:06.120607   13928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 11:12:06.256997   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711969926.251527594
	
	I0401 11:12:06.256997   13928 fix.go:216] guest clock: 1711969926.251527594
	I0401 11:12:06.256997   13928 fix.go:229] Guest: 2024-04-01 11:12:06.251527594 +0000 UTC Remote: 2024-04-01 11:12:01.2935512 +0000 UTC m=+58.789370401 (delta=4.957976394s)
	I0401 11:12:06.257089   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:08.474846   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:08.474908   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:08.474908   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:11.120345   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:11.120345   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:11.130054   13928 main.go:141] libmachine: Using SSH client type: native
	I0401 11:12:11.130207   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
	I0401 11:12:11.130207   13928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711969926
	I0401 11:12:11.274198   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:12:06 UTC 2024
	
	I0401 11:12:11.274198   13928 fix.go:236] clock set: Mon Apr  1 11:12:06 UTC 2024
	 (err=<nil>)
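The fix above closes the roughly 5-second gap between guest and host clocks (delta=4.957976394s) by pushing the host's epoch seconds into the guest, which is exactly what the logged sudo date -s @1711969926 does. The same correction by hand, assuming nothing beyond GNU date in the guest:

	now=$(date +%s)        # on the host: current time as epoch seconds
	sudo date -s "@$now"   # in the guest: set the clock to that instant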
	I0401 11:12:11.274198   13928 start.go:83] releasing machines lock for "functional-706500", held for 1m2.90912s
	I0401 11:12:11.274396   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:13.445515   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:16.095069   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:16.095069   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:16.098899   13928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:12:16.099055   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:16.110253   13928 ssh_runner.go:195] Run: cat /version.json
	I0401 11:12:16.110253   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
	I0401 11:12:18.356456   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:18.357479   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:18.357605   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:18.357665   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:12:21.024542   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:21.024944   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:21.025655   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:21.064561   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71
	
	I0401 11:12:21.065133   13928 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:12:21.065201   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
	I0401 11:12:21.132409   13928 ssh_runner.go:235] Completed: cat /version.json: (5.0219453s)
	I0401 11:12:21.146776   13928 ssh_runner.go:195] Run: systemctl --version
	I0401 11:12:23.162851   13928 ssh_runner.go:235] Completed: systemctl --version: (2.0160608s)
	I0401 11:12:23.162913   13928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0639645s)
	W0401 11:12:23.162913   13928 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0401 11:12:23.162913   13928 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0401 11:12:23.162913   13928 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
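The curl failure above is a DNS resolution timeout inside the guest (exit status 28 after the 2-second budget), not an HTTP-level error, hence the proxy hint. Two hedged checks that narrow this down with tools already on the ISO:

	# Re-run verbosely to see whether name resolution or the TCP connect stalls
	curl -v -sS -m 2 https://registry.k8s.io/ || echo "exit=$?"   # 28 = operation timed out
	# Inspect which resolver the guest is actually using
	cat /etc/resolv.conf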
	I0401 11:12:23.175996   13928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 11:12:23.184884   13928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:12:23.197334   13928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:12:23.215553   13928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 11:12:23.215605   13928 start.go:494] detecting cgroup driver to use...
	I0401 11:12:23.215905   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:12:23.265017   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:12:23.295678   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:12:23.315671   13928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:12:23.327504   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:12:23.360952   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:12:23.394039   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:12:23.426344   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:12:23.456513   13928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:12:23.488100   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:12:23.519746   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:12:23.553091   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:12:23.592313   13928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:12:23.623585   13928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:12:23.653587   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:12:23.876161   13928 ssh_runner.go:195] Run: sudo systemctl restart containerd
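The sed passes above rewrite /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.9, force the cgroupfs driver (SystemdCgroup = false), migrate the v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. Before the restart, a plain grep (patterns mirroring the sed expressions) confirms the edits landed:

	grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml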
	I0401 11:12:23.912674   13928 start.go:494] detecting cgroup driver to use...
	I0401 11:12:23.925853   13928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:12:23.965699   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:12:24.001877   13928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:12:24.047136   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:12:24.087468   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:12:24.112075   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:12:24.157975   13928 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:12:24.177102   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:12:24.195928   13928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
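The 189-byte payload scp'd from memory above is a systemd drop-in for cri-docker; its exact contents are not logged here, so they are not reproduced. What can be verified without guessing at the payload is that systemd picked the drop-in up, since systemctl cat lists every merged fragment:

	# Prints the base cri-docker.service followed by each
	# /etc/systemd/system/cri-docker.service.d/*.conf drop-in, 10-cni.conf included
	sudo systemctl cat cri-docker.service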
	I0401 11:12:24.240126   13928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:12:24.471761   13928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:12:24.701177   13928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:12:24.701468   13928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:12:24.749291   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:12:24.966635   13928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:13:49.885829   13928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m24.9186004s)
	I0401 11:13:49.899749   13928 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 11:13:49.985780   13928 out.go:177] 
	W0401 11:13:49.990428   13928 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
	Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
	Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
	Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
	Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
	Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
	Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 01 10:45:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	[ ... this start/fail cycle repeats 24 more times: systemd restarts docker.service, the new dockerd process logs "Starting up", and exactly 60 seconds later it exits with the identical `failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded`; the restart counter climbs from 2 through 25 (dockerd PIDs 5287 through 9619, Apr 01 10:45:43 to 11:09:49) ... ]
	Apr 01 11:09:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:09:49 functional-706500 dockerd[9784]: time="2024-04-01T11:09:49.212023599Z" level=info msg="Starting up"
	Apr 01 11:10:49 functional-706500 dockerd[9784]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:10:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:10:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Apr 01 11:10:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:10:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:10:49 functional-706500 dockerd[9947]: time="2024-04-01T11:10:49.471879307Z" level=info msg="Starting up"
	Apr 01 11:11:49 functional-706500 dockerd[9947]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:11:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
	Apr 01 11:11:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:11:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:11:49 functional-706500 dockerd[10267]: time="2024-04-01T11:11:49.718265627Z" level=info msg="Starting up"
	Apr 01 11:12:24 functional-706500 dockerd[10267]: time="2024-04-01T11:12:24.990418752Z" level=info msg="Processing signal 'terminated'"
	Apr 01 11:12:49 functional-706500 dockerd[10267]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:12:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 11:12:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 11:12:49 functional-706500 dockerd[10662]: time="2024-04-01T11:12:49.836764290Z" level=info msg="Starting up"
	Apr 01 11:13:49 functional-706500 dockerd[10662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 11:13:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 11:13:49.991137   13928 out.go:239] * 
	W0401 11:13:49.992691   13928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 11:13:49.996825   13928 out.go:177] 
	
	

-- /stdout --
** stderr ** 
	W0401 11:19:04.556868    4860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1234: out/minikube-windows-amd64.exe -p functional-706500 logs failed: exit status 1
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
| Command |                                            Args                                             |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
| delete  | -p download-only-373700                                                                     | download-only-373700 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
| start   | --download-only -p                                                                          | binary-mirror-729600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
|         | binary-mirror-729600                                                                        |                      |                   |                |                     |                     |
|         | --alsologtostderr                                                                           |                      |                   |                |                     |                     |
|         | --binary-mirror                                                                             |                      |                   |                |                     |                     |
|         | http://127.0.0.1:49987                                                                      |                      |                   |                |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
| delete  | -p binary-mirror-729600                                                                     | binary-mirror-729600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
| addons  | enable dashboard -p                                                                         | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
|         | addons-852800                                                                               |                      |                   |                |                     |                     |
| addons  | disable dashboard -p                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
|         | addons-852800                                                                               |                      |                   |                |                     |                     |
| start   | -p addons-852800 --wait=true                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:28 UTC |
|         | --memory=4000 --alsologtostderr                                                             |                      |                   |                |                     |                     |
|         | --addons=registry                                                                           |                      |                   |                |                     |                     |
|         | --addons=metrics-server                                                                     |                      |                   |                |                     |                     |
|         | --addons=volumesnapshots                                                                    |                      |                   |                |                     |                     |
|         | --addons=csi-hostpath-driver                                                                |                      |                   |                |                     |                     |
|         | --addons=gcp-auth                                                                           |                      |                   |                |                     |                     |
|         | --addons=cloud-spanner                                                                      |                      |                   |                |                     |                     |
|         | --addons=inspektor-gadget                                                                   |                      |                   |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                                        |                      |                   |                |                     |                     |
|         | --addons=nvidia-device-plugin                                                               |                      |                   |                |                     |                     |
|         | --addons=yakd --driver=hyperv                                                               |                      |                   |                |                     |                     |
|         | --addons=ingress                                                                            |                      |                   |                |                     |                     |
|         | --addons=ingress-dns                                                                        |                      |                   |                |                     |                     |
|         | --addons=helm-tiller                                                                        |                      |                   |                |                     |                     |
| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
|         | disable metrics-server                                                                      |                      |                   |                |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
| ssh     | addons-852800 ssh cat                                                                       | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
|         | /opt/local-path-provisioner/pvc-772810e3-66c1-4b28-81a8-0348debb99f1_default_test-pvc/file1 |                      |                   |                |                     |                     |
| ip      | addons-852800 ip                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
|         | registry --alsologtostderr                                                                  |                      |                   |                |                     |                     |
|         | -v=1                                                                                        |                      |                   |                |                     |                     |
| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:28 UTC |
|         | storage-provisioner-rancher                                                                 |                      |                   |                |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
| addons  | disable cloud-spanner -p                                                                    | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:29 UTC |
|         | addons-852800                                                                               |                      |                   |                |                     |                     |
| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:29 UTC |
|         | disable csi-hostpath-driver                                                                 |                      |                   |                |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
| addons  | enable headlamp                                                                             | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:28 UTC | 01 Apr 24 10:29 UTC |
|         | -p addons-852800                                                                            |                      |                   |                |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
| addons  | addons-852800 addons                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:29 UTC |
|         | disable volumesnapshots                                                                     |                      |                   |                |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:29 UTC |
|         | helm-tiller --alsologtostderr                                                               |                      |                   |                |                     |                     |
|         | -v=1                                                                                        |                      |                   |                |                     |                     |
| addons  | disable inspektor-gadget -p                                                                 | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:30 UTC |
|         | addons-852800                                                                               |                      |                   |                |                     |                     |
| addons  | disable nvidia-device-plugin                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:30 UTC |
|         | -p addons-852800                                                                            |                      |                   |                |                     |                     |
| ssh     | addons-852800 ssh curl -s                                                                   | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:30 UTC |
|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |                |                     |                     |
|         | nginx.example.com'                                                                          |                      |                   |                |                     |                     |
| ip      | addons-852800 ip                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
|         | ingress-dns --alsologtostderr                                                               |                      |                   |                |                     |                     |
|         | -v=1                                                                                        |                      |                   |                |                     |                     |
| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
|         | ingress --alsologtostderr -v=1                                                              |                      |                   |                |                     |                     |
| addons  | addons-852800 addons disable                                                                | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
|         | gcp-auth --alsologtostderr                                                                  |                      |                   |                |                     |                     |
|         | -v=1                                                                                        |                      |                   |                |                     |                     |
| stop    | -p addons-852800                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:32 UTC |
| addons  | enable dashboard -p                                                                         | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
|         | addons-852800                                                                               |                      |                   |                |                     |                     |
| addons  | disable dashboard -p                                                                        | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
|         | addons-852800                                                                               |                      |                   |                |                     |                     |
| addons  | disable gvisor -p                                                                           | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:32 UTC |
|         | addons-852800                                                                               |                      |                   |                |                     |                     |
| delete  | -p addons-852800                                                                            | addons-852800        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:32 UTC | 01 Apr 24 10:33 UTC |
| start   | -p nospam-189500 -n=1 --memory=2250 --wait=false                                            | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:33 UTC | 01 Apr 24 10:36 UTC |
|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                       |                      |                   |                |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
| start   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | start --dry-run                                                                             |                      |                   |                |                     |                     |
| start   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | start --dry-run                                                                             |                      |                   |                |                     |                     |
| start   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:36 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | start --dry-run                                                                             |                      |                   |                |                     |                     |
| pause   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | pause                                                                                       |                      |                   |                |                     |                     |
| pause   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | pause                                                                                       |                      |                   |                |                     |                     |
| pause   | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | pause                                                                                       |                      |                   |                |                     |                     |
| unpause | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:37 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | unpause                                                                                     |                      |                   |                |                     |                     |
| unpause | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:37 UTC | 01 Apr 24 10:38 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | unpause                                                                                     |                      |                   |                |                     |                     |
| unpause | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | unpause                                                                                     |                      |                   |                |                     |                     |
| stop    | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:38 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | stop                                                                                        |                      |                   |                |                     |                     |
| stop    | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:38 UTC | 01 Apr 24 10:39 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | stop                                                                                        |                      |                   |                |                     |                     |
| stop    | nospam-189500 --log_dir                                                                     | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500                                 |                      |                   |                |                     |                     |
|         | stop                                                                                        |                      |                   |                |                     |                     |
| delete  | -p nospam-189500                                                                            | nospam-189500        | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:39 UTC |
| start   | -p functional-706500                                                                        | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:39 UTC | 01 Apr 24 10:43 UTC |
|         | --memory=4000                                                                               |                      |                   |                |                     |                     |
|         | --apiserver-port=8441                                                                       |                      |                   |                |                     |                     |
|         | --wait=all --driver=hyperv                                                                  |                      |                   |                |                     |                     |
| start   | -p functional-706500                                                                        | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:43 UTC |                     |
|         | --alsologtostderr -v=8                                                                      |                      |                   |                |                     |                     |
| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:51 UTC | 01 Apr 24 10:53 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |                |                     |                     |
| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:53 UTC | 01 Apr 24 10:55 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |                |                     |                     |
| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:55 UTC | 01 Apr 24 10:57 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
| cache   | functional-706500 cache add                                                                 | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:57 UTC | 01 Apr 24 10:58 UTC |
|         | minikube-local-cache-test:functional-706500                                                 |                      |                   |                |                     |                     |
| cache   | functional-706500 cache delete                                                              | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
|         | minikube-local-cache-test:functional-706500                                                 |                      |                   |                |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |                |                     |                     |
| cache   | list                                                                                        | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC | 01 Apr 24 10:58 UTC |
| ssh     | functional-706500 ssh sudo                                                                  | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
|         | crictl images                                                                               |                      |                   |                |                     |                     |
| ssh     | functional-706500                                                                           | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:58 UTC |                     |
|         | ssh sudo docker rmi                                                                         |                      |                   |                |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
| ssh     | functional-706500 ssh                                                                       | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |                |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
| cache   | functional-706500 cache reload                                                              | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:59 UTC | 01 Apr 24 11:01 UTC |
| ssh     | functional-706500 ssh                                                                       | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |                |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |                |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |                |                     |                     |
| kubectl | functional-706500 kubectl --                                                                | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:05 UTC |                     |
|         | --context functional-706500                                                                 |                      |                   |                |                     |                     |
|         | get pods                                                                                    |                      |                   |                |                     |                     |
| start   | -p functional-706500                                                                        | functional-706500    | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:11 UTC |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |                |                     |                     |
|         | --wait=all                                                                                  |                      |                   |                |                     |                     |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/01 11:11:02
Running on machine: minikube6
Binary: Built with gc go1.22.1 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0401 11:11:02.675408   13928 out.go:291] Setting OutFile to fd 864 ...
I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 11:11:02.677219   13928 out.go:304] Setting ErrFile to fd 1008...
I0401 11:11:02.677219   13928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 11:11:02.702122   13928 out.go:298] Setting JSON to false
I0401 11:11:02.706199   13928 start.go:129] hostinfo: {"hostname":"minikube6","uptime":312621,"bootTime":1711657241,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
W0401 11:11:02.706199   13928 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0401 11:11:02.714712   13928 out.go:177] * [functional-706500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
I0401 11:11:02.718618   13928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
I0401 11:11:02.718618   13928 notify.go:220] Checking for updates...
I0401 11:11:02.720771   13928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0401 11:11:02.723643   13928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
I0401 11:11:02.727188   13928 out.go:177]   - MINIKUBE_LOCATION=18551
I0401 11:11:02.728921   13928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0401 11:11:02.731796   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0401 11:11:02.732867   13928 driver.go:392] Setting default libvirt URI to qemu:///system
I0401 11:11:08.299905   13928 out.go:177] * Using the hyperv driver based on existing profile
I0401 11:11:08.301941   13928 start.go:297] selected driver: hyperv
I0401 11:11:08.301941   13928 start.go:901] validating driver "hyperv" against &{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0401 11:11:08.301941   13928 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0401 11:11:08.352980   13928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0401 11:11:08.352980   13928 cni.go:84] Creating CNI manager for ""
I0401 11:11:08.352980   13928 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0401 11:11:08.352980   13928 start.go:340] cluster config:
{Name:functional-706500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.145.71 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0401 11:11:08.353717   13928 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 11:11:08.358695   13928 out.go:177] * Starting "functional-706500" primary control-plane node in "functional-706500" cluster
I0401 11:11:08.360937   13928 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0401 11:11:08.360937   13928 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
I0401 11:11:08.360937   13928 cache.go:56] Caching tarball of preloaded images
I0401 11:11:08.361555   13928 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0401 11:11:08.361555   13928 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0401 11:11:08.361555   13928 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-706500\config.json ...
I0401 11:11:08.364025   13928 start.go:360] acquireMachinesLock for functional-706500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0401 11:11:08.364637   13928 start.go:364] duration metric: took 612.4µs to acquireMachinesLock for "functional-706500"
I0401 11:11:08.364637   13928 start.go:96] Skipping create...Using existing machine configuration
I0401 11:11:08.364637   13928 fix.go:54] fixHost starting: 
I0401 11:11:08.365259   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:11.219191   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:11.219191   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:11.219191   13928 fix.go:112] recreateIfNeeded on functional-706500: state=Running err=<nil>
W0401 11:11:11.219191   13928 fix.go:138] unexpected machine state, will restart: <nil>
I0401 11:11:11.224961   13928 out.go:177] * Updating the running hyperv "functional-706500" VM ...
I0401 11:11:11.227211   13928 machine.go:94] provisionDockerMachine start ...
I0401 11:11:11.227211   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:13.436978   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:13.437051   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:13.437051   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:16.016234   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:16.016234   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:16.022101   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:11:16.022735   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:11:16.022735   13928 main.go:141] libmachine: About to run SSH command:
hostname
I0401 11:11:16.166363   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500

I0401 11:11:16.166363   13928 buildroot.go:166] provisioning hostname "functional-706500"
I0401 11:11:16.166363   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:18.351617   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:18.351861   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:18.351939   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:20.983006   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:20.983006   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:20.988905   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:11:20.989438   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:11:20.989438   13928 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-706500 && echo "functional-706500" | sudo tee /etc/hostname
I0401 11:11:21.150541   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-706500

I0401 11:11:21.150541   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:23.340502   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:23.340882   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:23.340882   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:26.010381   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:26.010381   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:26.017280   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:11:26.017280   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:11:26.017280   13928 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts;
			else 
				echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts; 
			fi
		fi
I0401 11:11:26.148356   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
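Editor's note: for readability, the /etc/hosts script run above, restated with comments (same logic, no behavioral change):

# grep -x matches whole lines, so this only fires when no line already
# ends in "<whitespace>functional-706500".
if ! grep -xq '.*\sfunctional-706500' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    # A 127.0.1.1 entry exists: rewrite it in place.
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-706500/g' /etc/hosts
  else
    # No 127.0.1.1 entry yet: append one (tee -a keeps the write under sudo).
    echo '127.0.1.1 functional-706500' | sudo tee -a /etc/hosts
  fi
fi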
I0401 11:11:26.148356   13928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0401 11:11:26.148425   13928 buildroot.go:174] setting up certificates
I0401 11:11:26.148425   13928 provision.go:84] configureAuth start
I0401 11:11:26.148425   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:28.313151   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:28.313151   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:28.313690   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:30.949501   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:30.949501   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:30.949682   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:33.152310   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:33.152310   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:33.153208   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:35.840975   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:35.841190   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:35.841190   13928 provision.go:143] copyHostCerts
I0401 11:11:35.841696   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0401 11:11:35.841696   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0401 11:11:35.842172   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
I0401 11:11:35.843643   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0401 11:11:35.843643   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0401 11:11:35.843959   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0401 11:11:35.845117   13928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0401 11:11:35.845117   13928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0401 11:11:35.845219   13928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
I0401 11:11:35.846465   13928 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-706500 san=[127.0.0.1 172.19.145.71 functional-706500 localhost minikube]
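Editor's note: provision.go generates this server certificate in Go. An equivalent openssl sketch with the same org and SAN list as the line above (file names follow the log's server.pem/server-key.pem; the openssl invocation itself is an illustration, not what minikube runs):

# CSR with the org from the log, then sign it with the minikube CA,
# attaching the SANs listed above.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr -subj "/O=jenkins.functional-706500"
openssl x509 -req -in server.csr -days 365 \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.19.145.71,DNS:functional-706500,DNS:localhost,DNS:minikube')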
I0401 11:11:36.004672   13928 provision.go:177] copyRemoteCerts
I0401 11:11:36.018395   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0401 11:11:36.018615   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:38.234069   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:38.234069   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:38.234069   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:40.901918   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:40.901918   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:40.902935   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
I0401 11:11:41.015623   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9971931s)
I0401 11:11:41.015623   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0401 11:11:41.066242   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0401 11:11:41.112581   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
I0401 11:11:41.163579   13928 provision.go:87] duration metric: took 15.0150492s to configureAuth
I0401 11:11:41.163579   13928 buildroot.go:189] setting minikube options for container-runtime
I0401 11:11:41.164283   13928 config.go:182] Loaded profile config "functional-706500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0401 11:11:41.164366   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:43.344617   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:43.344617   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:43.344617   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:45.977441   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:45.977441   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:45.982713   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:11:45.983473   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:11:45.983473   13928 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0401 11:11:46.126333   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0401 11:11:46.126333   13928 buildroot.go:70] root file system type: tmpfs
I0401 11:11:46.126632   13928 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0401 11:11:46.126702   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:48.312395   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:48.312395   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:48.312395   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:50.968210   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:50.968210   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:50.975527   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:11:50.976107   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:11:50.976324   13928 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0401 11:11:51.143276   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0401 11:11:51.143355   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:53.313983   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:53.313983   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:53.315008   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:11:55.944506   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:11:55.944506   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:55.952199   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:11:55.952984   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:11:55.952984   13928 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0401 11:11:56.098851   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
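Editor's note: the one-liner above is an update-only-if-changed guard. diff -u exits non-zero when docker.service.new differs from the installed unit (or the installed unit is missing), and only then is the new file moved into place and docker reloaded, enabled, and restarted. A commented sketch of the same pattern:

sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
  || {  # runs only when the rendered unit differs from the installed one
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl daemon-reload   # pick up the changed unit file
    sudo systemctl enable docker   # keep the service enabled across reboots
    sudo systemctl restart docker  # apply the new ExecStart flags
  }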
I0401 11:11:56.098851   13928 machine.go:97] duration metric: took 44.8713258s to provisionDockerMachine
I0401 11:11:56.098912   13928 start.go:293] postStartSetup for "functional-706500" (driver="hyperv")
I0401 11:11:56.098912   13928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0401 11:11:56.112299   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0401 11:11:56.112299   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:11:58.314497   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:11:58.314497   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:11:58.314550   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:12:01.020981   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:12:01.021995   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:01.022193   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
I0401 11:12:01.123158   13928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.010824s)
I0401 11:12:01.135833   13928 ssh_runner.go:195] Run: cat /etc/os-release
I0401 11:12:01.144329   13928 info.go:137] Remote host: Buildroot 2023.02.9
I0401 11:12:01.144329   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0401 11:12:01.145605   13928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0401 11:12:01.147606   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
I0401 11:12:01.148343   13928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts -> hosts in /etc/test/nested/copy/1260
I0401 11:12:01.161761   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1260
I0401 11:12:01.191883   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
I0401 11:12:01.242289   13928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts --> /etc/test/nested/copy/1260/hosts (40 bytes)
I0401 11:12:01.293425   13928 start.go:296] duration metric: took 5.1944768s for postStartSetup
I0401 11:12:01.293551   13928 fix.go:56] duration metric: took 52.928543s for fixHost
I0401 11:12:01.293609   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:12:03.483354   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:12:03.483354   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:03.483354   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:12:06.113661   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:12:06.113926   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:06.119841   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:12:06.120607   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:12:06.120607   13928 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0401 11:12:06.256997   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711969926.251527594

I0401 11:12:06.256997   13928 fix.go:216] guest clock: 1711969926.251527594
I0401 11:12:06.256997   13928 fix.go:229] Guest: 2024-04-01 11:12:06.251527594 +0000 UTC Remote: 2024-04-01 11:12:01.2935512 +0000 UTC m=+58.789370401 (delta=4.957976394s)
I0401 11:12:06.257089   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:12:08.474846   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:12:08.474908   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:08.474908   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:12:11.120345   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:12:11.120345   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:11.130054   13928 main.go:141] libmachine: Using SSH client type: native
I0401 11:12:11.130207   13928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.71 22 <nil> <nil>}
I0401 11:12:11.130207   13928 main.go:141] libmachine: About to run SSH command:
sudo date -s @1711969926
I0401 11:12:11.274198   13928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:12:06 UTC 2024

I0401 11:12:11.274198   13928 fix.go:236] clock set: Mon Apr  1 11:12:06 UTC 2024
(err=<nil>)
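Editor's note: the %!s(MISSING) / %!N(MISSING) tokens earlier are Go fmt verbs logged without their arguments; judging by the output (1711969926.251527594), the command actually run was `date +%s.%N`. minikube compares that guest timestamp against the host clock and, on a large enough delta (~4.96 s here), resets the guest clock. Done by hand it would look like this (sketch; key path and user taken from the ssh client lines in this log):

ssh -i .minikube/machines/functional-706500/id_rsa docker@172.19.145.71 'date +%s.%N'
# host-guest delta was ~4.96 s, so set the guest clock to the host's epoch:
ssh -i .minikube/machines/functional-706500/id_rsa docker@172.19.145.71 'sudo date -s @1711969926'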
I0401 11:12:11.274198   13928 start.go:83] releasing machines lock for "functional-706500", held for 1m2.90912s
I0401 11:12:11.274396   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:12:13.445515   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:12:13.445515   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:13.445515   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:12:16.095069   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:12:16.095069   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:16.098899   13928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0401 11:12:16.099055   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:12:16.110253   13928 ssh_runner.go:195] Run: cat /version.json
I0401 11:12:16.110253   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-706500 ).state
I0401 11:12:18.356456   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:12:18.357479   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:18.357605   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:12:18.357665   13928 main.go:141] libmachine: [stdout =====>] : Running

I0401 11:12:18.357665   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:18.357665   13928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-706500 ).networkadapters[0]).ipaddresses[0]
I0401 11:12:21.024542   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:12:21.024944   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:21.025655   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
I0401 11:12:21.064561   13928 main.go:141] libmachine: [stdout =====>] : 172.19.145.71

I0401 11:12:21.065133   13928 main.go:141] libmachine: [stderr =====>] : 
I0401 11:12:21.065201   13928 sshutil.go:53] new ssh client: &{IP:172.19.145.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-706500\id_rsa Username:docker}
I0401 11:12:21.132409   13928 ssh_runner.go:235] Completed: cat /version.json: (5.0219453s)
I0401 11:12:21.146776   13928 ssh_runner.go:195] Run: systemctl --version
I0401 11:12:23.162851   13928 ssh_runner.go:235] Completed: systemctl --version: (2.0160608s)
I0401 11:12:23.162913   13928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0639645s)
W0401 11:12:23.162913   13928 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
stdout:

stderr:
curl: (28) Resolving timed out after 2001 milliseconds
W0401 11:12:23.162913   13928 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
W0401 11:12:23.162913   13928 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
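Editor's note: curl exit 28 with "Resolving timed out" means the guest could not resolve registry.k8s.io within the 2-second budget, i.e. a DNS problem inside the VM rather than an HTTP failure. A quick manual check from the host (sketch; key path and user as in the ssh client lines above, and busybox nslookup availability in the guest is assumed):

ssh -i .minikube/machines/functional-706500/id_rsa docker@172.19.145.71 \
  'cat /etc/resolv.conf; nslookup registry.k8s.io; curl -sS -m 5 https://registry.k8s.io/'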
I0401 11:12:23.175996   13928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0401 11:12:23.184884   13928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0401 11:12:23.197334   13928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0401 11:12:23.215553   13928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0401 11:12:23.215605   13928 start.go:494] detecting cgroup driver to use...
I0401 11:12:23.215905   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0401 11:12:23.265017   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0401 11:12:23.295678   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0401 11:12:23.315671   13928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0401 11:12:23.327504   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0401 11:12:23.360952   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 11:12:23.394039   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0401 11:12:23.426344   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 11:12:23.456513   13928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0401 11:12:23.488100   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0401 11:12:23.519746   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0401 11:12:23.553091   13928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0401 11:12:23.592313   13928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0401 11:12:23.623585   13928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0401 11:12:23.653587   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 11:12:23.876161   13928 ssh_runner.go:195] Run: sudo systemctl restart containerd
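Editor's note: the sed run above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj is disabled, SystemdCgroup is forced to false (the "cgroupfs" driver named in the log), the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, the CNI conf_dir is set to /etc/cni/net.d, and unprivileged ports are enabled, after which containerd is reloaded and restarted. A spot-check of the result (sketch):

# Confirm the rewritten keys landed in config.toml:
sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
  /etc/containerd/config.toml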
I0401 11:12:23.912674   13928 start.go:494] detecting cgroup driver to use...
I0401 11:12:23.925853   13928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0401 11:12:23.965699   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0401 11:12:24.001877   13928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0401 11:12:24.047136   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0401 11:12:24.087468   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0401 11:12:24.112075   13928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0401 11:12:24.157975   13928 ssh_runner.go:195] Run: which cri-dockerd
I0401 11:12:24.177102   13928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0401 11:12:24.195928   13928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0401 11:12:24.240126   13928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0401 11:12:24.471761   13928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0401 11:12:24.701177   13928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0401 11:12:24.701468   13928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
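Editor's note: the 130-byte daemon.json is not echoed in the log. A minimal sketch of a daemon.json selecting the cgroupfs driver the log names (the file contents are an assumption; only the driver choice is confirmed above):

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF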
I0401 11:12:24.749291   13928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 11:12:24.966635   13928 ssh_runner.go:195] Run: sudo systemctl restart docker
I0401 11:13:49.885829   13928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m24.9186004s)
I0401 11:13:49.899749   13928 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0401 11:13:49.985780   13928 out.go:177] 
W0401 11:13:49.990428   13928 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

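Editor's note: the restart hung for 1m24.9s and then failed, so minikube dumps the docker unit journal below. The triage the error message itself suggests, plus a sanity check on the freshly installed unit (sketch):

systemctl status docker.service --no-pager
journalctl -xeu docker.service --no-pager
# verify the rendered unit file parses before retrying:
systemd-analyze verify /lib/systemd/system/docker.service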
sudo journalctl --no-pager -u docker:
-- stdout --
Apr 01 10:41:36 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.798187907Z" level=info msg="Starting up"
Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.799265393Z" level=info msg="containerd not running, starting managed containerd"
Apr 01 10:41:36 functional-706500 dockerd[664]: time="2024-04-01T10:41:36.801153967Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.834503116Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864798705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.864898504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865059802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865113001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865212000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865256999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865499096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865555295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865576395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865609294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.865753793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.867014575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870257832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870310031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870444729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870462329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870596027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870743325Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.870834824Z" level=info msg="metadata content store policy set" policy=shared
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897462063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.897805459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898135354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898171054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898190053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898313952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.898989643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899167740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899272439Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899301338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899318538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899336538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899352338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899368237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899383937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899398337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899418637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899434837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899462836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899479636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899498336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899514336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899529235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899556535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899570935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899585235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899614334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899636734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899649934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899665933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899687733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899701533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899713833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899763132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899805432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899821631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.899834531Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900053728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900097828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900112627Z" level=info msg="NRI interface is disabled by configuration."
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900346624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900522822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900606221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 01 10:41:36 functional-706500 dockerd[670]: time="2024-04-01T10:41:36.900628520Z" level=info msg="containerd successfully booted in 0.067896s"
Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.881266690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 01 10:41:37 functional-706500 dockerd[664]: time="2024-04-01T10:41:37.913749578Z" level=info msg="Loading containers: start."
Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.190567623Z" level=info msg="Loading containers: done."
Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210532331Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.210792325Z" level=info msg="Daemon has completed initialization"
Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.322256037Z" level=info msg="API listen on /var/run/docker.sock"
Apr 01 10:41:38 functional-706500 systemd[1]: Started Docker Application Container Engine.
Apr 01 10:41:38 functional-706500 dockerd[664]: time="2024-04-01T10:41:38.324372495Z" level=info msg="API listen on [::]:2376"
Apr 01 10:42:10 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.255417869Z" level=info msg="Processing signal 'terminated'"
Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257242855Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257742051Z" level=info msg="Daemon shutdown complete"
Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257819351Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 01 10:42:10 functional-706500 dockerd[664]: time="2024-04-01T10:42:10.257833051Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 01 10:42:11 functional-706500 systemd[1]: docker.service: Deactivated successfully.
Apr 01 10:42:11 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 10:42:11 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.342920152Z" level=info msg="Starting up"
Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.343996643Z" level=info msg="containerd not running, starting managed containerd"
Apr 01 10:42:11 functional-706500 dockerd[1025]: time="2024-04-01T10:42:11.350366294Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.379880466Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407703150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407800050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407838949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407854149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407878249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.407890849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408011548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408108047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408128347Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408139547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408169347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.408321546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411838418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.411935718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412067417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412160216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412200215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412272815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412357314Z" level=info msg="metadata content store policy set" policy=shared
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412750611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412805011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412832511Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412847710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412861010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.412910010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413179008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413301707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413393506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413413606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413426506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413439206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413452806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413465706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413515705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413534505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413546905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413563605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413582305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413595805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413608005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413620404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413632004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413643604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413700804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413723304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413737904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413760303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413786603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413799903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413814703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413834603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413864903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.413876203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414040001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414189400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414205900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414216900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414274199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414309299Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414322199Z" level=info msg="NRI interface is disabled by configuration."
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414595197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414761096Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414885695Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 01 10:42:11 functional-706500 dockerd[1032]: time="2024-04-01T10:42:11.414906595Z" level=info msg="containerd successfully booted in 0.036445s"
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.397553189Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.424109483Z" level=info msg="Loading containers: start."
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.620687361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.700390644Z" level=info msg="Loading containers: done."
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723684464Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.723808263Z" level=info msg="Daemon has completed initialization"
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774193973Z" level=info msg="API listen on /var/run/docker.sock"
Apr 01 10:42:12 functional-706500 dockerd[1025]: time="2024-04-01T10:42:12.774790569Z" level=info msg="API listen on [::]:2376"
Apr 01 10:42:12 functional-706500 systemd[1]: Started Docker Application Container Engine.
Apr 01 10:42:22 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.638112423Z" level=info msg="Processing signal 'terminated'"
Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.639327614Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640716203Z" level=info msg="Daemon shutdown complete"
Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640868202Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 01 10:42:22 functional-706500 dockerd[1025]: time="2024-04-01T10:42:22.640898802Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 01 10:42:23 functional-706500 systemd[1]: docker.service: Deactivated successfully.
Apr 01 10:42:23 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 10:42:23 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.723910919Z" level=info msg="Starting up"
Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.725000810Z" level=info msg="containerd not running, starting managed containerd"
Apr 01 10:42:23 functional-706500 dockerd[1345]: time="2024-04-01T10:42:23.726014302Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1352
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.759076547Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788713617Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788946115Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.788990715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789007215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789034015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789047115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789266413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789288013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789301913Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789316412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789341712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.789459111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792587087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792687186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792828085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792943184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.792972984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793010384Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793022684Z" level=info msg="metadata content store policy set" policy=shared
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793413281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793582679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793606179Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793621379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793635979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793684079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.793941477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794104875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794125675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794158275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794173275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794203775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794335474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794357273Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794373173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794402573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794415973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794428773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794454973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794474373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794552872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794565572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794579272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794625371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794639371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794652571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794667671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794702971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794716771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794730071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794746570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794772670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794786170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794799470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794849070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794867070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794879269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794909469Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794977269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.794994569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795005768Z" level=info msg="NRI interface is disabled by configuration."
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795369966Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795590564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795736263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 01 10:42:23 functional-706500 dockerd[1352]: time="2024-04-01T10:42:23.795759763Z" level=info msg="containerd successfully booted in 0.037893s"
Apr 01 10:42:25 functional-706500 dockerd[1345]: time="2024-04-01T10:42:25.032578289Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 01 10:42:27 functional-706500 dockerd[1345]: time="2024-04-01T10:42:27.902821873Z" level=info msg="Loading containers: start."
Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.089242230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.170001204Z" level=info msg="Loading containers: done."
Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201080364Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.201145363Z" level=info msg="Daemon has completed initialization"
Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252237768Z" level=info msg="API listen on /var/run/docker.sock"
Apr 01 10:42:28 functional-706500 dockerd[1345]: time="2024-04-01T10:42:28.252653065Z" level=info msg="API listen on [::]:2376"
Apr 01 10:42:28 functional-706500 systemd[1]: Started Docker Application Container Engine.
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188810301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188880103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188893204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.188986106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222294330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222419134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222438534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.222693442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339412529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.339736439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340151651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.340705268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348745916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348853319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.348991523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671543735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671626637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671702839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.671911146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740384550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.740599157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.741320879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.744106464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911666513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911763116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911782517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.911912721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922191337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922411843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922453845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:42:38 functional-706500 dockerd[1352]: time="2024-04-01T10:42:38.922615750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298727909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298894809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.298999610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.299329810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.618827208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.619017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620163211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.620641611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984015178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984326179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984356179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:00 functional-706500 dockerd[1352]: time="2024-04-01T10:43:00.984644079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.570306286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571192465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571318363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:01 functional-706500 dockerd[1352]: time="2024-04-01T10:43:01.571741053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300449471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300562468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300578368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.300681565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.634619601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636208556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636344553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:43:08 functional-706500 dockerd[1352]: time="2024-04-01T10:43:08.636540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.625210127Z" level=info msg="Processing signal 'terminated'"
Apr 01 10:44:31 functional-706500 systemd[1]: Stopping Docker Application Container Engine...
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767505499Z" level=info msg="shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767645695Z" level=warning msg="cleaning up after shim disconnected" id=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.767660294Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.768509967Z" level=info msg="ignoring event" container=c666d739e17dc62034ac144c0d11fbb28189824bf32ae69224fa59b645f71ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.825505453Z" level=info msg="ignoring event" container=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.827805780Z" level=info msg="shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828118170Z" level=warning msg="cleaning up after shim disconnected" id=143d56456f5f030226ca3eaf7194c09acdbe8f84183b68e23f320c061c9a493b namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.828152169Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.861149719Z" level=info msg="ignoring event" container=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862059790Z" level=info msg="shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862305182Z" level=warning msg="cleaning up after shim disconnected" id=df3e478ce9accc918aff3245b91c22e49b81322de468fa48130d395b51bc9f89 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.862589373Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.874617090Z" level=info msg="ignoring event" container=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.875146374Z" level=info msg="shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876041645Z" level=warning msg="cleaning up after shim disconnected" id=9a700e55062f8400efc22f04f088c00e9674ba962c1972851a17e2daeaf1ece1 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.876268938Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.899616495Z" level=info msg="shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.899944984Z" level=info msg="ignoring event" container=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900397770Z" level=info msg="ignoring event" container=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.900783058Z" level=info msg="ignoring event" container=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902355908Z" level=warning msg="cleaning up after shim disconnected" id=79e18d35d245306884f733d70e8cfc9c0a85dc3b9dba409f0a50e6dfda5eb587 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.902444205Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913209762Z" level=info msg="ignoring event" container=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.913305259Z" level=info msg="ignoring event" container=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900168877Z" level=info msg="shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.900293873Z" level=info msg="shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916277065Z" level=warning msg="cleaning up after shim disconnected" id=3db01d14b814ef61205f26f84b2bdfa6d44d87ac3ffb52c04715d4f287a98100 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.916379561Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.918951280Z" level=info msg="shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919013978Z" level=warning msg="cleaning up after shim disconnected" id=b871790f052a532974c5bd9d1fe6f5f1a56642de9b257857f89c97638a55fa08 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.919043477Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926677934Z" level=warning msg="cleaning up after shim disconnected" id=72af670e788b85aee75f994bbb88680a3200d0f12dc04c1c830a761b1b2d2630 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.926803730Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930165323Z" level=info msg="shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930420415Z" level=warning msg="cleaning up after shim disconnected" id=82960f93044861d48a65d30c2ee568f8a1713c7ab9ab0b32dfc5107f6ef941cf namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.930547411Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957149064Z" level=info msg="shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957361857Z" level=warning msg="cleaning up after shim disconnected" id=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.957684447Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.965337403Z" level=info msg="shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969190881Z" level=warning msg="cleaning up after shim disconnected" id=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969305877Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972086089Z" level=info msg="ignoring event" container=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972241984Z" level=info msg="ignoring event" container=f287754f5c0f40c0f9bd15197c3820e5abfac5477d8f1a59a09deaef1be1dfd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1345]: time="2024-04-01T10:44:31.972407078Z" level=info msg="ignoring event" container=8b9c896e6cafc7922726d539a7030a551682ad8a130a643e19285efd5df18ad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.969165282Z" level=info msg="shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974766703Z" level=warning msg="cleaning up after shim disconnected" id=fe3710e9327ed55226982934dafa59a0e87da8f452e52133927388e55bcd35c2 namespace=moby
Apr 01 10:44:31 functional-706500 dockerd[1352]: time="2024-04-01T10:44:31.974984096Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:32 functional-706500 dockerd[1352]: time="2024-04-01T10:44:32.043930902Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:44:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.776471695Z" level=info msg="shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777041777Z" level=warning msg="cleaning up after shim disconnected" id=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 namespace=moby
Apr 01 10:44:36 functional-706500 dockerd[1352]: time="2024-04-01T10:44:36.777308068Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:36 functional-706500 dockerd[1345]: time="2024-04-01T10:44:36.781075248Z" level=info msg="ignoring event" container=d5329d710848f374748f1591f15d5b5372daddfc425bd2fac1c0944e66a60ba4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.810274179Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96
Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.868790416Z" level=info msg="ignoring event" container=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.869660949Z" level=info msg="shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870570373Z" level=warning msg="cleaning up after shim disconnected" id=d78489859036f97cfbc07ce8a63e8b90f1dd1830b4b8e9ff327bcd1af59a6c96 namespace=moby
Apr 01 10:44:41 functional-706500 dockerd[1352]: time="2024-04-01T10:44:41.870583871Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.931309183Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932335386Z" level=info msg="Daemon shutdown complete"
Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932413970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 01 10:44:41 functional-706500 dockerd[1345]: time="2024-04-01T10:44:41.932442665Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Deactivated successfully.
Apr 01 10:44:42 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 10:44:42 functional-706500 systemd[1]: docker.service: Consumed 7.717s CPU time.
Apr 01 10:44:42 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:44:43 functional-706500 dockerd[5139]: time="2024-04-01T10:44:43.012129916Z" level=info msg="Starting up"
Apr 01 10:45:43 functional-706500 dockerd[5139]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 01 10:45:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
Apr 01 10:45:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Apr 01 10:45:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
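[editor's note] Every failed start below follows the same shape as the cycle above: dockerd logs "Starting up", then exactly 60 seconds later the dial to /run/containerd/containerd.sock aborts with "context deadline exceeded", systemd records the exit as 1/FAILURE, and the restart counter ticks up (it reaches 10 before the log ends). The following Go sketch illustrates the blocking-dial behavior that produces this exact error string; it is an illustration of the failure mode, not the dockerd source, and the socket path and 60 s deadline are simply taken from the log above.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    )

    func main() {
    	// 60s matches the gap between "Starting up" and the dial failure
    	// in each cycle of the journal above.
    	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
    	defer cancel()

    	// grpc.WithBlock makes DialContext keep retrying the socket until
    	// the context deadline; if nothing ever listens on the socket, the
    	// call returns "context deadline exceeded", as logged by dockerd.
    	conn, err := grpc.DialContext(ctx,
    		"unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()),
    		grpc.WithBlock(),
    	)
    	if err != nil {
    		fmt.Println("failed to dial:", err) // context deadline exceeded
    		return
    	}
    	defer conn.Close()
    	fmt.Println("connected to containerd")
    }

The "Scheduled restart job" lines are systemd's standard Restart=on-failure behavior for docker.service, which is why the identical 60-second cycle repeats below until something outside the unit intervenes.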
Apr 01 10:45:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:45:43 functional-706500 dockerd[5287]: time="2024-04-01T10:45:43.246929152Z" level=info msg="Starting up"
Apr 01 10:46:43 functional-706500 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 01 10:46:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
Apr 01 10:46:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Apr 01 10:46:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 10:46:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:46:43 functional-706500 dockerd[5501]: time="2024-04-01T10:46:43.470357918Z" level=info msg="Starting up"
Apr 01 10:47:43 functional-706500 dockerd[5501]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 01 10:47:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
Apr 01 10:47:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Apr 01 10:47:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 10:47:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:47:43 functional-706500 dockerd[5669]: time="2024-04-01T10:47:43.721563433Z" level=info msg="Starting up"
Apr 01 10:48:43 functional-706500 dockerd[5669]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 01 10:48:43 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
Apr 01 10:48:43 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
Apr 01 10:48:43 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 10:48:43 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 10:48:43 functional-706500 dockerd[5837]: time="2024-04-01T10:48:43.970136068Z" level=info msg="Starting up"
[... 22 further identical restart cycles elided: each dockerd attempt fails after 60 seconds with the same "failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded", systemd reschedules it, and the restart counter climbs from 5 to 26 between 10:49 and 11:10 ...]
Apr 01 11:10:49 functional-706500 dockerd[9947]: time="2024-04-01T11:10:49.471879307Z" level=info msg="Starting up"
Apr 01 11:11:49 functional-706500 dockerd[9947]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 01 11:11:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.
Apr 01 11:11:49 functional-706500 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
Apr 01 11:11:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 11:11:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 11:11:49 functional-706500 dockerd[10267]: time="2024-04-01T11:11:49.718265627Z" level=info msg="Starting up"
Apr 01 11:12:24 functional-706500 dockerd[10267]: time="2024-04-01T11:12:24.990418752Z" level=info msg="Processing signal 'terminated'"
Apr 01 11:12:49 functional-706500 dockerd[10267]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 11:12:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 01 11:12:49 functional-706500 systemd[1]: Stopped Docker Application Container Engine.
Apr 01 11:12:49 functional-706500 systemd[1]: Starting Docker Application Container Engine...
Apr 01 11:12:49 functional-706500 dockerd[10662]: time="2024-04-01T11:12:49.836764290Z" level=info msg="Starting up"
Apr 01 11:13:49 functional-706500 dockerd[10662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 11:13:49 functional-706500 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 01 11:13:49 functional-706500 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
W0401 11:13:49.991137   13928 out.go:239] * 
W0401 11:13:49.992691   13928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0401 11:13:49.996825   13928 out.go:177] 

                                                
                                                

                                                
                                                
***
--- FAIL: TestFunctional/serial/LogsCmd (28.34s)
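
The journal above shows why the logs command had nothing healthy to report: every dockerd attempt spends its full 60-second startup budget trying to dial /run/containerd/containerd.sock, exits with "context deadline exceeded", and is rescheduled by systemd until minikube gives up. A minimal sketch, assuming only the Go standard library (this is not dockerd's actual startup code), of the dial-with-deadline retry loop that produces this exact error shape:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/run/containerd/containerd.sock"
		// dockerd's real budget is ~60s per attempt; a short deadline makes the
		// failure quick to observe.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", sock)
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket is reachable")
				return
			}
			select {
			case <-ctx.Done():
				// With nothing listening on the socket this prints a
				// "context deadline exceeded" error, matching the journal
				// lines above.
				fmt.Println("failed to dial", sock+":", ctx.Err())
				return
			case <-time.After(200 * time.Millisecond):
				// retry until the deadline expires
			}
		}
	}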

                                                
                                    
x
+
TestFunctional/parallel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel
functional_test.go:168: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)
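
This is expected fallout from the serial failures above: functional_test.go evidently checks the test binary's -timeout deadline before scheduling the parallel group and refuses to start it once the budget is spent. A minimal sketch of that guard built on testing.T.Deadline() (requireTimeBudget is a hypothetical helper name, not minikube's code):

	package example

	import (
		"testing"
		"time"
	)

	// requireTimeBudget fails fast when the test binary's -timeout deadline
	// leaves less than `need` on the clock, mirroring the
	// "Unable to run more tests (deadline exceeded)" failure above.
	func requireTimeBudget(t *testing.T, need time.Duration) {
		t.Helper()
		if deadline, ok := t.Deadline(); ok && time.Until(deadline) < need {
			t.Fatalf("Unable to run more tests (deadline exceeded)")
		}
	}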

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (72.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-f5xk7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-f5xk7 -- sh -c "ping -c 1 172.19.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-f5xk7 -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.5590759s)

                                                
                                                
-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:34:39.547113   13688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.144.1) from pod (busybox-7fdf7869d9-f5xk7): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-gr89z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-gr89z -- sh -c "ping -c 1 172.19.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-gr89z -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.579188s)

                                                
                                                
-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:34:50.688093    8124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.144.1) from pod (busybox-7fdf7869d9-gr89z): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-q7xs6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-q7xs6 -- sh -c "ping -c 1 172.19.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-q7xs6 -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.5546123s)

                                                
                                                
-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:35:01.850245    1636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.144.1) from pod (busybox-7fdf7869d9-q7xs6): exit status 1
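
In all three pods DNS works (the nslookup of host.minikube.internal completes) but every ICMP packet to the Hyper-V gateway 172.19.144.1 is lost; on the hyperv driver that pattern is often the Windows host firewall dropping echo requests from the VM network rather than a cluster networking fault. A minimal sketch of the check the test performs, assuming plain kubectl on PATH instead of the bundled out/minikube-windows-amd64.exe:

	package example

	import (
		"fmt"
		"os/exec"
	)

	// pingHostFromPod is a sketch of the failing check, not the test's own
	// code: run a single ping inside the pod and surface a non-zero exit
	// (100% packet loss) as an error.
	func pingHostFromPod(kubeContext, pod, hostIP string) error {
		cmd := exec.Command("kubectl", "--context", kubeContext, "exec", pod,
			"--", "sh", "-c", fmt.Sprintf("ping -c 1 %s", hostIP))
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("ping %s from %s: %v\n%s", hostIP, pod, err, out)
		}
		return nil
	}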
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-401500 -n ha-401500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-401500 -n ha-401500: (13.0068393s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 logs -n 25: (9.6882231s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:01 UTC | 01 Apr 24 11:01 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |                |                     |                     |
	| kubectl | functional-706500 kubectl --                                             | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:05 UTC |                     |
	|         | --context functional-706500                                              |                   |                   |                |                     |                     |
	|         | get pods                                                                 |                   |                   |                |                     |                     |
	| start   | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:11 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |                |                     |                     |
	|         | --wait=all                                                               |                   |                   |                |                     |                     |
	| delete  | -p functional-706500                                                     | functional-706500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:19 UTC | 01 Apr 24 11:20 UTC |
	| start   | -p ha-401500 --wait=true                                                 | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:20 UTC | 01 Apr 24 11:33 UTC |
	|         | --memory=2200 --ha                                                       |                   |                   |                |                     |                     |
	|         | -v=7 --alsologtostderr                                                   |                   |                   |                |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- apply -f                                                 | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml                                       |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- rollout status                                           | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | deployment/busybox                                                       |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- get pods -o                                              | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | jsonpath='{.items[*].status.podIP}'                                      |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- get pods -o                                              | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-f5xk7 --                                              |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-gr89z --                                              |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-q7xs6 --                                              |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-f5xk7 --                                              |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-gr89z --                                              |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-q7xs6 --                                              |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-f5xk7 -- nslookup                                     |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-gr89z -- nslookup                                     |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-q7xs6 -- nslookup                                     |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- get pods -o                                              | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-f5xk7                                                 |                   |                   |                |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |                |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC |                     |
	|         | busybox-7fdf7869d9-f5xk7 -- sh                                           |                   |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.144.1                                                |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC | 01 Apr 24 11:34 UTC |
	|         | busybox-7fdf7869d9-gr89z                                                 |                   |                   |                |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |                |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:34 UTC |                     |
	|         | busybox-7fdf7869d9-gr89z -- sh                                           |                   |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.144.1                                                |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:35 UTC | 01 Apr 24 11:35 UTC |
	|         | busybox-7fdf7869d9-q7xs6                                                 |                   |                   |                |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |                |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |                |                     |                     |
	| kubectl | -p ha-401500 -- exec                                                     | ha-401500         | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:35 UTC |                     |
	|         | busybox-7fdf7869d9-q7xs6 -- sh                                           |                   |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.144.1                                                |                   |                   |                |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 11:20:09
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 11:20:09.958181   12872 out.go:291] Setting OutFile to fd 1008 ...
	I0401 11:20:09.958812   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:09.958812   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:20:09.958812   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:09.986028   12872 out.go:298] Setting JSON to false
	I0401 11:20:09.991015   12872 start.go:129] hostinfo: {"hostname":"minikube6","uptime":313168,"bootTime":1711657241,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 11:20:09.991015   12872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 11:20:09.995348   12872 out.go:177] * [ha-401500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 11:20:09.998749   12872 notify.go:220] Checking for updates...
	I0401 11:20:09.999725   12872 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:20:10.001767   12872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:20:10.003754   12872 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 11:20:10.006745   12872 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:20:10.008770   12872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:20:10.011758   12872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:20:15.668057   12872 out.go:177] * Using the hyperv driver based on user configuration
	I0401 11:20:15.671592   12872 start.go:297] selected driver: hyperv
	I0401 11:20:15.671592   12872 start.go:901] validating driver "hyperv" against <nil>
	I0401 11:20:15.671592   12872 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:20:15.724851   12872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 11:20:15.726083   12872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:20:15.726259   12872 cni.go:84] Creating CNI manager for ""
	I0401 11:20:15.726259   12872 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0401 11:20:15.726259   12872 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 11:20:15.726259   12872 start.go:340] cluster config:
	{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:20:15.726259   12872 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 11:20:15.730654   12872 out.go:177] * Starting "ha-401500" primary control-plane node in "ha-401500" cluster
	I0401 11:20:15.732981   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:20:15.732981   12872 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 11:20:15.732981   12872 cache.go:56] Caching tarball of preloaded images
	I0401 11:20:15.733502   12872 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:20:15.733669   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:20:15.734199   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:20:15.734428   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json: {Name:mkee2f372bb024ea4eb6a289a94c70141fb4b78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:15.735787   12872 start.go:360] acquireMachinesLock for ha-401500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:20:15.735787   12872 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-401500"
	I0401 11:20:15.735787   12872 start.go:93] Provisioning new machine with config: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:20:15.735787   12872 start.go:125] createHost starting for "" (driver="hyperv")
	I0401 11:20:15.740904   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 11:20:15.741569   12872 start.go:159] libmachine.API.Create for "ha-401500" (driver="hyperv")
	I0401 11:20:15.741569   12872 client.go:168] LocalClient.Create starting
	I0401 11:20:15.741761   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 11:20:15.741761   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:20:15.742313   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:20:15.742444   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 11:20:15.742774   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:20:15.742820   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:20:15.742929   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 11:20:17.947596   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 11:20:17.947596   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:17.948558   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 11:20:19.783159   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 11:20:19.783159   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:19.783234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:20:21.366910   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:20:21.367902   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:21.368117   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:20:25.159543   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:20:25.159603   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:25.161994   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 11:20:25.689799   12872 main.go:141] libmachine: Creating SSH key...
	I0401 11:20:25.889733   12872 main.go:141] libmachine: Creating VM...
	I0401 11:20:25.890753   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:20:28.865647   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:20:28.866120   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:28.866120   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0401 11:20:28.866120   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:20:30.786591   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:20:30.786715   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:30.786715   12872 main.go:141] libmachine: Creating VHD
	I0401 11:20:30.786804   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 11:20:34.649179   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6418C629-8011-4DB6-A5A9-1C2F45A7C7FA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 11:20:34.649946   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:34.649946   12872 main.go:141] libmachine: Writing magic tar header
	I0401 11:20:34.650071   12872 main.go:141] libmachine: Writing SSH key tar header
	I0401 11:20:34.664142   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 11:20:38.034051   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:38.034051   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:38.034051   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\disk.vhd' -SizeBytes 20000MB
	I0401 11:20:40.771482   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:40.771665   12872 main.go:141] libmachine: [stderr =====>] : 
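
Annotation: the "Creating VHD" / "Writing magic tar header" / "Writing SSH key tar header" sequence above is the classic libmachine boot2docker disk trick: a small fixed-size VHD is just raw disk data plus a footer, so a tar stream written at byte 0 — a magic marker file followed by the SSH public key — tells the guest's automount service on first boot to format the disk and install the key. The VHD is then converted to a dynamic disk and resized to its final 20000MB. A condensed, hedged Go sketch (file names follow the boot2docker convention; the function name is ours):

    // Sketch of seeding a fixed VHD with a tar stream the guest recognizes.
    package driver

    import (
        "archive/tar"
        "os"
    )

    const magic = "boot2docker, please format-me"

    func seedVHD(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f) // tar stream starts at byte 0 of the raw disk
        // Magic marker first, so the guest knows this disk wants formatting.
        if err := tw.WriteHeader(&tar.Header{Name: magic, Size: int64(len(magic))}); err != nil {
            return err
        }
        if _, err := tw.Write([]byte(magic)); err != nil {
            return err
        }
        // Then the public key, which the guest installs as authorized_keys.
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close()
    }
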
	I0401 11:20:40.771708   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-401500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0401 11:20:44.710669   12872 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-401500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 11:20:44.710669   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:44.710669   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-401500 -DynamicMemoryEnabled $false
	I0401 11:20:47.152774   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:47.153330   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:47.153330   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-401500 -Count 2
	I0401 11:20:49.427854   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:49.428044   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:49.428155   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-401500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\boot2docker.iso'
	I0401 11:20:52.196449   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:52.196613   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:52.196719   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-401500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\disk.vhd'
	I0401 11:20:54.959360   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:54.963381   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:54.963381   12872 main.go:141] libmachine: Starting VM...
	I0401 11:20:54.963445   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-401500
	I0401 11:20:58.184682   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:58.184682   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:58.184682   12872 main.go:141] libmachine: Waiting for host to start...
	I0401 11:20:58.184817   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:00.533680   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:00.533718   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:00.533846   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:03.217674   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:03.217674   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:04.228448   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:06.565338   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:06.565400   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:06.565400   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:09.240020   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:09.240020   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:10.244815   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:12.584660   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:12.585756   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:12.586036   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:15.210292   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:15.210292   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:16.211797   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:18.565314   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:18.565314   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:18.565420   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:21.222497   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:21.222497   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:22.233375   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:24.585674   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:24.585674   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:24.586316   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:27.654188   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:27.654188   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:27.654188   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:29.915773   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:29.915773   12872 main.go:141] libmachine: [stderr =====>] : 
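
Annotation: "Waiting for host to start..." above is a simple poll — query the VM state, then its first reported adapter address, and retry until an address (here 172.19.153.73) appears. A minimal Go sketch of that loop, assuming PowerShell is on PATH; the one-second retry interval is illustrative, the logged loop pauses longer between attempts.

    // Minimal sketch of the wait-for-IP poll seen in the log above.
    package driver

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForIP(vmName string, timeout time.Duration) (string, error) {
        query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("powershell", "-NoProfile", "-NonInteractive", query).Output()
            if err == nil {
                if ip := strings.TrimSpace(string(out)); ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("VM %q reported no IP address within %v", vmName, timeout)
    }
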
	I0401 11:21:29.916055   12872 machine.go:94] provisionDockerMachine start ...
	I0401 11:21:29.916210   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:32.192000   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:32.192000   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:32.192000   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:34.971209   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:34.971537   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:34.977441   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:21:34.988195   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:21:34.988195   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:21:35.128782   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 11:21:35.128886   12872 buildroot.go:166] provisioning hostname "ha-401500"
	I0401 11:21:35.128990   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:37.435002   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:37.435830   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:37.435912   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:40.140343   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:40.141396   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:40.147044   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:21:40.147600   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:21:40.147744   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401500 && echo "ha-401500" | sudo tee /etc/hostname
	I0401 11:21:40.304874   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401500
	
	I0401 11:21:40.304874   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:42.551086   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:42.551086   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:42.551086   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:45.245322   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:45.245630   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:45.251871   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:21:45.252528   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:21:45.252528   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:21:45.411665   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:21:45.411665   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:21:45.411665   12872 buildroot.go:174] setting up certificates
	I0401 11:21:45.411665   12872 provision.go:84] configureAuth start
	I0401 11:21:45.411665   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:47.697645   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:47.697843   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:47.697941   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:50.426307   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:50.426866   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:50.426866   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:52.699545   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:52.699545   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:52.699625   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:55.463663   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:55.463663   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:55.464243   12872 provision.go:143] copyHostCerts
	I0401 11:21:55.464391   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 11:21:55.464416   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:21:55.464416   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:21:55.464976   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:21:55.466381   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 11:21:55.466616   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:21:55.466684   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:21:55.466929   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:21:55.467973   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 11:21:55.468222   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:21:55.468271   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:21:55.469302   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:21:55.470588   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-401500 san=[127.0.0.1 172.19.153.73 ha-401500 localhost minikube]
	I0401 11:21:55.991291   12872 provision.go:177] copyRemoteCerts
	I0401 11:21:56.006086   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:21:56.006086   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:58.299042   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:58.299894   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:58.299981   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:01.018248   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:01.018687   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:01.018751   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:01.133762   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1275788s)
	I0401 11:22:01.133833   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 11:22:01.133833   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:22:01.181330   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 11:22:01.181330   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0401 11:22:01.241071   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 11:22:01.241550   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 11:22:01.303683   12872 provision.go:87] duration metric: took 15.8918637s to configureAuth
	I0401 11:22:01.303724   12872 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:22:01.304104   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:22:01.304104   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:03.575632   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:03.575632   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:03.575737   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:06.315013   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:06.315770   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:06.322465   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:06.322611   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:06.322611   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:22:06.453162   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:22:06.453162   12872 buildroot.go:70] root file system type: tmpfs
	I0401 11:22:06.453162   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:22:06.453711   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:08.715812   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:08.715812   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:08.716801   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:11.409628   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:11.409785   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:11.415242   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:11.415945   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:11.415945   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:22:11.579718   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 11:22:11.579831   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:13.842278   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:13.842367   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:13.842367   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:16.481195   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:16.481589   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:16.487110   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:16.487732   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:16.487732   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:22:18.653185   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
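
Annotation: the `diff ... || { mv ...; systemctl ... }` one-liner above makes the unit install idempotent: the rendered docker.service.new replaces the live unit, and docker is reloaded, enabled, and restarted, only when the two files differ (here diff fails because no unit existed yet, hence the "Created symlink" output). A hedged Go helper that assembles the same command; the function name is ours, not minikube's:

    package driver

    // installUnitCmd("docker") reproduces the SSH command logged above.
    func installUnitCmd(svc string) string {
        path := "/lib/systemd/system/" + svc + ".service"
        return "sudo diff -u " + path + " " + path + ".new || " +
            "{ sudo mv " + path + ".new " + path + "; " +
            "sudo systemctl -f daemon-reload && sudo systemctl -f enable " + svc +
            " && sudo systemctl -f restart " + svc + "; }"
    }
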
	
	I0401 11:22:18.653185   12872 machine.go:97] duration metric: took 48.7367321s to provisionDockerMachine
	I0401 11:22:18.653185   12872 client.go:171] duration metric: took 2m2.9107434s to LocalClient.Create
	I0401 11:22:18.653185   12872 start.go:167] duration metric: took 2m2.9107434s to libmachine.API.Create "ha-401500"
	I0401 11:22:18.653185   12872 start.go:293] postStartSetup for "ha-401500" (driver="hyperv")
	I0401 11:22:18.653185   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:22:18.667493   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:22:18.667493   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:20.911183   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:20.911183   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:20.911416   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:23.591542   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:23.591542   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:23.592668   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:23.699451   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0319221s)
	I0401 11:22:23.714122   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:22:23.722998   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:22:23.722998   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:22:23.722998   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:22:23.724676   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:22:23.724727   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 11:22:23.738491   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:22:23.759003   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:22:23.814524   12872 start.go:296] duration metric: took 5.1608412s for postStartSetup
	I0401 11:22:23.817147   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:26.064866   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:26.065075   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:26.065075   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:28.755698   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:28.755698   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:28.756594   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:22:28.759267   12872 start.go:128] duration metric: took 2m13.022536s to createHost
	I0401 11:22:28.759802   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:31.018851   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:31.018851   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:31.018851   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:33.745186   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:33.745186   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:33.752656   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:33.752916   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:33.752916   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 11:22:33.891104   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711970553.885159057
	
	I0401 11:22:33.891104   12872 fix.go:216] guest clock: 1711970553.885159057
	I0401 11:22:33.891210   12872 fix.go:229] Guest: 2024-04-01 11:22:33.885159057 +0000 UTC Remote: 2024-04-01 11:22:28.7592675 +0000 UTC m=+138.992378801 (delta=5.125891557s)
	I0401 11:22:33.891282   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:36.213569   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:36.213569   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:36.213831   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:38.921844   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:38.922359   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:38.929284   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:38.929723   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:38.930296   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711970553
	I0401 11:22:39.083238   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:22:33 UTC 2024
	
	I0401 11:22:39.084235   12872 fix.go:236] clock set: Mon Apr  1 11:22:33 UTC 2024
	 (err=<nil>)
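
Annotation: the fix.go lines above read the guest clock over SSH (`date +%s.%N`), compute the drift against the host (5.1s here, accumulated while the VM booted), and reset the guest clock with `sudo date -s @<epoch>`. A hedged Go sketch; the two-second threshold and the `run` callback are assumptions for illustration, not minikube's exact values.

    // Sketch of the guest clock fix-up; `run` stands in for the SSH runner.
    package driver

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func fixGuestClock(run func(cmd string) (string, error)) error {
        out, err := run("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(0, int64(secs*float64(time.Second))))
        if drift > 2*time.Second || drift < -2*time.Second {
            _, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }
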
	I0401 11:22:39.084235   12872 start.go:83] releasing machines lock for "ha-401500", held for 2m23.3474302s
	I0401 11:22:39.084235   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:41.288362   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:41.288504   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:41.288504   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:43.986086   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:43.986086   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:43.990874   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:22:43.990952   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:44.000596   12872 ssh_runner.go:195] Run: cat /version.json
	I0401 11:22:44.000596   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:46.324096   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:46.324096   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:46.324096   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:46.324238   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:46.324238   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:46.324238   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:49.097385   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:49.097385   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:49.098697   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:49.124947   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:49.125085   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:49.125761   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:49.203929   12872 ssh_runner.go:235] Completed: cat /version.json: (5.2031665s)
	I0401 11:22:49.216735   12872 ssh_runner.go:195] Run: systemctl --version
	I0401 11:22:49.274646   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2837343s)
	I0401 11:22:49.287726   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 11:22:49.296404   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:22:49.307339   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:22:49.335423   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 11:22:49.335423   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:22:49.335833   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:22:49.387348   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:22:49.417911   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:22:49.439303   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:22:49.451130   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:22:49.488652   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:22:49.520112   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:22:49.553622   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:22:49.586659   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:22:49.620768   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:22:49.654826   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:22:49.686665   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:22:49.719959   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:22:49.756599   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:22:49.787956   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:50.020647   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:22:50.058435   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:22:50.071339   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:22:50.112278   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:22:50.154969   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:22:50.197536   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:22:50.232430   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:22:50.272028   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:22:50.339524   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:22:50.365778   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:22:50.415749   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:22:50.435470   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:22:50.454740   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:22:50.503850   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:22:50.709536   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:22:50.901353   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:22:50.901613   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:22:50.950825   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:51.154131   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:22:53.722203   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5680541s)
	I0401 11:22:53.734660   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 11:22:53.771790   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:22:53.809943   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 11:22:54.037543   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 11:22:54.263629   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:54.484522   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 11:22:54.533419   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:22:54.576449   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:54.808859   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 11:22:54.916240   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 11:22:54.928400   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 11:22:54.937390   12872 start.go:562] Will wait 60s for crictl version
	I0401 11:22:54.947387   12872 ssh_runner.go:195] Run: which crictl
	I0401 11:22:54.966760   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:22:55.046335   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 11:22:55.060081   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:22:55.106961   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:22:55.145223   12872 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 11:22:55.145460   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 11:22:55.154233   12872 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 11:22:55.154233   12872 ip.go:210] interface addr: 172.19.144.1/20
	I0401 11:22:55.167241   12872 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 11:22:55.175484   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:22:55.211626   12872 kubeadm.go:877] updating cluster {Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 11:22:55.211626   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:22:55.220685   12872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 11:22:55.244065   12872 docker.go:685] Got preloaded images: 
	I0401 11:22:55.244065   12872 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0401 11:22:55.257240   12872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 11:22:55.293428   12872 ssh_runner.go:195] Run: which lz4
	I0401 11:22:55.300447   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0401 11:22:55.312718   12872 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 11:22:55.317554   12872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 11:22:55.317554   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0401 11:22:57.476787   12872 docker.go:649] duration metric: took 2.1762419s to copy over tarball
	I0401 11:22:57.491136   12872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 11:23:06.343668   12872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8523846s)
	I0401 11:23:06.343668   12872 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 11:23:06.415209   12872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 11:23:06.435162   12872 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0401 11:23:06.481453   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:23:06.719963   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:23:09.939933   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2199472s)
	I0401 11:23:09.950773   12872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 11:23:09.976251   12872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0401 11:23:09.976251   12872 cache_images.go:84] Images are preloaded, skipping loading
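
Annotation: the preload path above — "kube-apiserver:v1.29.3 wasn't preloaded", scp of the 367MB lz4 tarball, then `tar --xattrs ... -I lz4 -C /var -xf` — seeds the docker image store in one shot instead of pulling each image over the network. A hedged Go sketch of the guest-side steps; `run` stands in for the SSH runner and the wrapper is ours:

    package driver

    import "fmt"

    // extractPreload unpacks the copied tarball into /var, preserving
    // security xattrs so file capabilities survive, then removes it.
    func extractPreload(run func(cmd string) (string, error)) error {
        if _, err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
            return fmt.Errorf("preload tarball missing: %w", err)
        }
        if _, err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
            return err
        }
        _, err := run("rm /preloaded.tar.lz4")
        return err
    }
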
	I0401 11:23:09.976251   12872 kubeadm.go:928] updating node { 172.19.153.73 8443 v1.29.3 docker true true} ...
	I0401 11:23:09.976251   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-401500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.153.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 11:23:09.986760   12872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0401 11:23:10.023706   12872 cni.go:84] Creating CNI manager for ""
	I0401 11:23:10.023748   12872 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 11:23:10.023807   12872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 11:23:10.023807   12872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.153.73 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-401500 NodeName:ha-401500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.153.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.153.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 11:23:10.023807   12872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.153.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-401500"
	  kubeletExtraArgs:
	    node-ip: 172.19.153.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.153.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 11:23:10.023807   12872 kube-vip.go:111] generating kube-vip config ...
	I0401 11:23:10.038487   12872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 11:23:10.071576   12872 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 11:23:10.071762   12872 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
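Once this static pod is running, the advertised VIP (172.19.159.254) is claimed by whichever control-plane node currently holds the plndr-cp-lock lease (vip_leasename above). A minimal check, assuming kubectl access and the names taken from the manifest:

    # The lease holder is the current kube-vip leader.
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
    # On the leader, the VIP should be bound to eth0 (vip_interface above).
    ip addr show eth0 | grep 172.19.159.254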
	I0401 11:23:10.086461   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 11:23:10.105265   12872 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 11:23:10.122378   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0401 11:23:10.145492   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0401 11:23:10.180363   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:23:10.217361   12872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0401 11:23:10.255972   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0401 11:23:10.302184   12872 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0401 11:23:10.309618   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:23:10.345391   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:23:10.575861   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:23:10.608738   12872 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500 for IP: 172.19.153.73
	I0401 11:23:10.608738   12872 certs.go:194] generating shared ca certs ...
	I0401 11:23:10.608931   12872 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:10.609677   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 11:23:10.610122   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 11:23:10.610183   12872 certs.go:256] generating profile certs ...
	I0401 11:23:10.611054   12872 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key
	I0401 11:23:10.611264   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.crt with IP's: []
	I0401 11:23:10.999213   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.crt ...
	I0401 11:23:10.999213   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.crt: {Name:mk509712757761f333b5c32ef54f4a38ffc199ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.001205   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key ...
	I0401 11:23:11.001205   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key: {Name:mkd4e7cd761140dd8d5f554482c5b9785b00f60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.002212   12872 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea
	I0401 11:23:11.002212   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.153.73 172.19.159.254]
	I0401 11:23:11.325644   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea ...
	I0401 11:23:11.325644   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea: {Name:mk12b87cb53027b4d13055127261e3a8281b77e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.327101   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea ...
	I0401 11:23:11.327101   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea: {Name:mk4a8764380b69ad826c3ae1d1a5760b71241788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.328352   12872 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt
	I0401 11:23:11.340431   12872 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key
	I0401 11:23:11.341356   12872 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key
	I0401 11:23:11.341356   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt with IP's: []
	I0401 11:23:11.534308   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt ...
	I0401 11:23:11.534308   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt: {Name:mkd5c65cb2feb76384684744ec21e6f206c25eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.535368   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key ...
	I0401 11:23:11.535368   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key: {Name:mk1d87f6b19e07b54fc72f7df7c27133de3a504e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.536336   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 11:23:11.548957   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 11:23:11.549283   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 11:23:11.549862   12872 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 11:23:11.549999   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 11:23:11.550292   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 11:23:11.550600   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 11:23:11.550848   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 11:23:11.551406   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 11:23:11.551630   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:11.551845   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem -> /usr/share/ca-certificates/1260.pem
	I0401 11:23:11.552011   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /usr/share/ca-certificates/12602.pem
	I0401 11:23:11.553211   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:23:11.601569   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:23:11.658572   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:23:11.711279   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 11:23:11.763414   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 11:23:11.819548   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 11:23:11.869125   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:23:11.917510   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 11:23:11.970908   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:23:12.018072   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 11:23:12.069596   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 11:23:12.117966   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
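The apiserver serving cert generated at 11:23:11 is signed for the node IP (172.19.153.73) and the HA VIP (172.19.159.254) in addition to the service and loopback IPs. A quick way to confirm the SANs once the files above have landed on the node (sketch; path taken from the scp lines):

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'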
	I0401 11:23:12.166035   12872 ssh_runner.go:195] Run: openssl version
	I0401 11:23:12.188523   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 11:23:12.222638   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 11:23:12.231406   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 11:23:12.244594   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 11:23:12.266371   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:23:12.297800   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:23:12.330603   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:12.339449   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:12.351872   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:12.374949   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 11:23:12.407913   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 11:23:12.439205   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 11:23:12.446475   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 11:23:12.460911   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 11:23:12.487116   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
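The ls/hash/ln sequence above follows OpenSSL's c_rehash convention: each trusted CA is exposed under /etc/ssl/certs as a symlink named <subject-hash>.0, where the hash is what `openssl x509 -hash` prints. A sketch of the round trip for the minikube CA:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${HASH}.0"   # expected: b5213941.0, matching the log above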
	I0401 11:23:12.526335   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:23:12.535271   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 11:23:12.535271   12872 kubeadm.go:391] StartCluster: {Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:23:12.546243   12872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0401 11:23:12.590192   12872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 11:23:12.624196   12872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 11:23:12.655425   12872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 11:23:12.674488   12872 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 11:23:12.674488   12872 kubeadm.go:156] found existing configuration files:
	
	I0401 11:23:12.687469   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 11:23:12.703453   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 11:23:12.715853   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 11:23:12.747799   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 11:23:12.769066   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 11:23:12.780868   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 11:23:12.812081   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 11:23:12.829667   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 11:23:12.841860   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 11:23:12.871800   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 11:23:12.889761   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 11:23:12.904925   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 11:23:12.923333   12872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 11:23:13.449915   12872 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 11:23:28.609333   12872 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 11:23:28.609771   12872 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 11:23:28.609984   12872 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 11:23:28.610181   12872 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 11:23:28.610540   12872 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 11:23:28.610671   12872 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 11:23:28.613697   12872 out.go:204]   - Generating certificates and keys ...
	I0401 11:23:28.613697   12872 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 11:23:28.613697   12872 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 11:23:28.614375   12872 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 11:23:28.614534   12872 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 11:23:28.614696   12872 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 11:23:28.614841   12872 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 11:23:28.615021   12872 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 11:23:28.615399   12872 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-401500 localhost] and IPs [172.19.153.73 127.0.0.1 ::1]
	I0401 11:23:28.615553   12872 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 11:23:28.615855   12872 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-401500 localhost] and IPs [172.19.153.73 127.0.0.1 ::1]
	I0401 11:23:28.616037   12872 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 11:23:28.616216   12872 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 11:23:28.616394   12872 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 11:23:28.616511   12872 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 11:23:28.616701   12872 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 11:23:28.616872   12872 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 11:23:28.617062   12872 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 11:23:28.617263   12872 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 11:23:28.617307   12872 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 11:23:28.617307   12872 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 11:23:28.617307   12872 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 11:23:28.620723   12872 out.go:204]   - Booting up control plane ...
	I0401 11:23:28.620723   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 11:23:28.621743   12872 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 11:23:28.621743   12872 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 11:23:28.621743   12872 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.603521 seconds
	I0401 11:23:28.622348   12872 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 11:23:28.622348   12872 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 11:23:28.622348   12872 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 11:23:28.623361   12872 kubeadm.go:309] [mark-control-plane] Marking the node ha-401500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 11:23:28.623361   12872 kubeadm.go:309] [bootstrap-token] Using token: jgil8o.iynv4v6pgp2ssyrk
	I0401 11:23:28.628393   12872 out.go:204]   - Configuring RBAC rules ...
	I0401 11:23:28.628393   12872 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 11:23:28.628393   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 11:23:28.628393   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 11:23:28.629362   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 11:23:28.629362   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 11:23:28.629362   12872 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 11:23:28.629362   12872 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 11:23:28.629362   12872 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 11:23:28.629362   12872 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 11:23:28.629362   12872 kubeadm.go:309] 
	I0401 11:23:28.630386   12872 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 11:23:28.630386   12872 kubeadm.go:309] 
	I0401 11:23:28.630386   12872 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 11:23:28.630386   12872 kubeadm.go:309] 
	I0401 11:23:28.630386   12872 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 11:23:28.630386   12872 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 11:23:28.630998   12872 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 11:23:28.631085   12872 kubeadm.go:309] 
	I0401 11:23:28.631270   12872 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 11:23:28.631327   12872 kubeadm.go:309] 
	I0401 11:23:28.631471   12872 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 11:23:28.631471   12872 kubeadm.go:309] 
	I0401 11:23:28.631586   12872 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 11:23:28.631755   12872 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 11:23:28.631917   12872 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 11:23:28.631917   12872 kubeadm.go:309] 
	I0401 11:23:28.632096   12872 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 11:23:28.632284   12872 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 11:23:28.632284   12872 kubeadm.go:309] 
	I0401 11:23:28.632474   12872 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jgil8o.iynv4v6pgp2ssyrk \
	I0401 11:23:28.632683   12872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c \
	I0401 11:23:28.632683   12872 kubeadm.go:309] 	--control-plane 
	I0401 11:23:28.632683   12872 kubeadm.go:309] 
	I0401 11:23:28.632893   12872 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 11:23:28.632893   12872 kubeadm.go:309] 
	I0401 11:23:28.633112   12872 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jgil8o.iynv4v6pgp2ssyrk \
	I0401 11:23:28.633309   12872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c 
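The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's public key. If the init output is lost, the hash can be recomputed from the CA certificate using the standard kubeadm recipe (note that in this minikube layout the CA lives under /var/lib/minikube/certs rather than /etc/kubernetes/pki):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'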
	I0401 11:23:28.633309   12872 cni.go:84] Creating CNI manager for ""
	I0401 11:23:28.633309   12872 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 11:23:28.634814   12872 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 11:23:28.654018   12872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 11:23:28.665700   12872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0401 11:23:28.665700   12872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0401 11:23:28.738304   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
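The cni.yaml applied above deploys kindnet as a DaemonSet in kube-system. Assuming the default kindnet manifest labels, the rollout can be confirmed with:

    kubectl -n kube-system get pods -l app=kindnet -o wide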
	I0401 11:23:29.551880   12872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 11:23:29.566744   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-401500 minikube.k8s.io/updated_at=2024_04_01T11_23_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=ha-401500 minikube.k8s.io/primary=true
	I0401 11:23:29.566744   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:29.618262   12872 ops.go:34] apiserver oom_adj: -16
	I0401 11:23:29.827749   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:30.340068   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:30.827634   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:31.335244   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:31.838270   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:32.340731   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:32.828250   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:33.329378   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:33.830279   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:34.328423   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:34.841549   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:35.342967   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:35.835091   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:36.336315   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:36.839066   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:37.328203   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:37.837944   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:38.339394   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:38.842372   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:39.328943   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:39.831299   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:40.339048   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:40.553047   12872 kubeadm.go:1107] duration metric: took 11.0009865s to wait for elevateKubeSystemPrivileges
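The burst of `kubectl get sa default` calls above (11:23:29 through 11:23:40) is a readiness poll: minikube retries until the default ServiceAccount exists, which signals that the controller-manager's service-account controllers are up. A minimal bash equivalent of that loop, using the paths from the log:

    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly 500ms between attempts
    done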
	W0401 11:23:40.553047   12872 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 11:23:40.553047   12872 kubeadm.go:393] duration metric: took 28.0175775s to StartCluster
	I0401 11:23:40.553047   12872 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:40.553047   12872 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:23:40.555044   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:40.556046   12872 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:23:40.556046   12872 start.go:240] waiting for startup goroutines ...
	I0401 11:23:40.556046   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 11:23:40.556046   12872 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 11:23:40.556046   12872 addons.go:69] Setting storage-provisioner=true in profile "ha-401500"
	I0401 11:23:40.556046   12872 addons.go:69] Setting default-storageclass=true in profile "ha-401500"
	I0401 11:23:40.556046   12872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-401500"
	I0401 11:23:40.556046   12872 addons.go:234] Setting addon storage-provisioner=true in "ha-401500"
	I0401 11:23:40.557039   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:23:40.557039   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:23:40.557039   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:40.558047   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:40.805041   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 11:23:41.333371   12872 start.go:946] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
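The sed pipeline at 11:23:40 patches CoreDNS's Corefile in place, inserting a hosts stanza that maps host.minikube.internal to the Hyper-V host (172.19.144.1) ahead of the forward plugin, and enabling query logging. The resulting Corefile can be inspected with:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'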
	I0401 11:23:42.967628   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:42.967800   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:42.967800   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:42.967800   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:42.970171   12872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 11:23:42.968492   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:23:42.973232   12872 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:23:42.973267   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 11:23:42.973362   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:42.974278   12872 kapi.go:59] client config for ha-401500: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 11:23:42.975889   12872 cert_rotation.go:137] Starting client certificate rotation controller
	I0401 11:23:42.975889   12872 addons.go:234] Setting addon default-storageclass=true in "ha-401500"
	I0401 11:23:42.976418   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:23:42.977068   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:45.393478   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:45.393771   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:45.393844   12872 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 11:23:45.393844   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 11:23:45.393844   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:45.402206   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:45.402206   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:45.402206   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:23:47.775275   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:47.776051   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:47.776051   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:23:48.287708   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:23:48.287708   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:48.288543   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:23:48.448315   12872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:23:50.571629   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:23:50.572307   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:50.572854   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:23:50.728200   12872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 11:23:51.095294   12872 round_trippers.go:463] GET https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0401 11:23:51.095294   12872 round_trippers.go:469] Request Headers:
	I0401 11:23:51.095294   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:23:51.095294   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:23:51.109752   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:23:51.111289   12872 round_trippers.go:463] PUT https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0401 11:23:51.111289   12872 round_trippers.go:469] Request Headers:
	I0401 11:23:51.111289   12872 round_trippers.go:473]     Content-Type: application/json
	I0401 11:23:51.111289   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:23:51.111289   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:23:51.114808   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:23:51.122092   12872 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 11:23:51.128488   12872 addons.go:505] duration metric: took 10.5723671s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 11:23:51.128488   12872 start.go:245] waiting for cluster config update ...
	I0401 11:23:51.128488   12872 start.go:254] writing updated cluster config ...
	I0401 11:23:51.131226   12872 out.go:177] 
	I0401 11:23:51.143078   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:23:51.143610   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:23:51.151527   12872 out.go:177] * Starting "ha-401500-m02" control-plane node in "ha-401500" cluster
	I0401 11:23:51.157510   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:23:51.158467   12872 cache.go:56] Caching tarball of preloaded images
	I0401 11:23:51.158467   12872 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:23:51.158467   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:23:51.158467   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:23:51.161485   12872 start.go:360] acquireMachinesLock for ha-401500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:23:51.161485   12872 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-401500-m02"
	I0401 11:23:51.162475   12872 start.go:93] Provisioning new machine with config: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:23:51.162475   12872 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0401 11:23:51.166526   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 11:23:51.166526   12872 start.go:159] libmachine.API.Create for "ha-401500" (driver="hyperv")
	I0401 11:23:51.166526   12872 client.go:168] LocalClient.Create starting
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 11:23:53.230681   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 11:23:53.230863   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:53.230863   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 11:23:55.067655   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 11:23:55.067655   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:55.067655   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:23:56.632719   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:23:56.632719   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:56.632719   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:24:00.448816   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:24:00.448816   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:00.451201   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 11:24:00.984253   12872 main.go:141] libmachine: Creating SSH key...
	I0401 11:24:01.124196   12872 main.go:141] libmachine: Creating VM...
	I0401 11:24:01.124196   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:24:04.122176   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:24:04.122303   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:04.122303   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0401 11:24:04.122399   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:24:06.000973   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:24:06.000973   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:06.000973   12872 main.go:141] libmachine: Creating VHD
	I0401 11:24:06.000973   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 11:24:09.897500   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : EECA16FE-B004-4547-B8DE-1C1C2D9B142B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 11:24:09.898250   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:09.898250   12872 main.go:141] libmachine: Writing magic tar header
	I0401 11:24:09.898250   12872 main.go:141] libmachine: Writing SSH key tar header
	I0401 11:24:09.907972   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 11:24:13.169670   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:13.170819   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:13.170890   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\disk.vhd' -SizeBytes 20000MB
	I0401 11:24:15.764189   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:15.764624   12872 main.go:141] libmachine: [stderr =====>] : 
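	The three VHD steps above implement the boot2docker disk trick: create a small fixed VHD (whose footer sits at the end of the file, so the data area starts at byte 0), write a tar stream into it carrying a "format me" marker and the SSH public key, then convert it to a dynamic VHD and grow it to the requested 20000MB. A sketch of the tar step, assuming the conventional boot2docker entry names; treat the names and writeMagicTar as illustrative:

    package provision

    import (
        "archive/tar"
        "os"
    )

    // writeMagicTar writes a tar stream at offset 0 of a fixed VHD. On first
    // boot the guest sees the marker entry, formats the disk, and installs the
    // key as authorized_keys.
    func writeMagicTar(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        magic := &tar.Header{Name: "boot2docker, please format-me", Typeflag: tar.TypeReg}
        if err := tw.WriteHeader(magic); err != nil {
            return err
        }
        key := &tar.Header{Name: ".ssh/authorized_keys", Typeflag: tar.TypeReg, Mode: 0644, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(key); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close() // flush the trailing end-of-archive blocks into the image
    }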
	I0401 11:24:15.764741   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-401500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0401 11:24:19.532643   12872 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-401500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 11:24:19.532643   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:19.533185   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-401500-m02 -DynamicMemoryEnabled $false
	I0401 11:24:21.858037   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:21.858037   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:21.858703   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-401500-m02 -Count 2
	I0401 11:24:24.102721   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:24.102721   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:24.102721   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-401500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\boot2docker.iso'
	I0401 11:24:26.826117   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:26.826117   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:26.826627   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-401500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\disk.vhd'
	I0401 11:24:29.583986   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:29.584213   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:29.584213   12872 main.go:141] libmachine: Starting VM...
	I0401 11:24:29.584213   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-401500-m02
	I0401 11:24:32.808475   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:32.808475   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:32.808569   12872 main.go:141] libmachine: Waiting for host to start...
	I0401 11:24:32.808739   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:35.243595   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:35.243595   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:35.244422   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:37.950490   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:37.950490   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:38.964066   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:41.310344   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:41.310344   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:41.311358   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:43.972564   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:43.972628   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:44.973234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:47.289636   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:47.289636   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:47.290278   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:49.935956   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:49.935956   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:50.936667   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:53.294479   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:53.294479   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:53.295263   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:55.960299   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:55.960299   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:56.963871   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:59.271351   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:59.271651   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:59.271651   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:02.018689   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:02.018749   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:02.018865   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:04.286663   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:04.286663   12872 main.go:141] libmachine: [stderr =====>] : 
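	"Waiting for host to start..." above is a plain poll: check the VM state, then ask for the first NIC address, sleeping between rounds until Hyper-V reports one (six rounds here before 172.19.149.50 appears). A sketch of that loop with the runner passed in; all names are illustrative:

    package provision

    import (
        "fmt"
        "strings"
        "time"
    )

    // waitForIP polls the VM state and its first NIC address until an
    // address is reported or the deadline passes.
    func waitForIP(run func(string) ([]byte, error), vmName string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := run(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vmName))
            if err != nil || strings.TrimSpace(string(state)) != "Running" {
                return "", fmt.Errorf("vm %s is not running", vmName)
            }
            ip, _ := run(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName))
            if s := strings.TrimSpace(string(ip)); s != "" {
                return s, nil
            }
            time.Sleep(time.Second) // the log shows roughly one retry per second
        }
        return "", fmt.Errorf("timed out waiting for an IP on %s", vmName)
    }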
	I0401 11:25:04.286663   12872 machine.go:94] provisionDockerMachine start ...
	I0401 11:25:04.286663   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:06.582664   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:06.583149   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:06.583250   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:09.392036   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:09.392036   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:09.397400   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:09.410555   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:09.410555   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:25:09.534127   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 11:25:09.534229   12872 buildroot.go:166] provisioning hostname "ha-401500-m02"
	I0401 11:25:09.534322   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:11.828444   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:11.828444   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:11.829420   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:14.610883   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:14.610883   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:14.617059   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:14.617178   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:14.617178   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401500-m02 && echo "ha-401500-m02" | sudo tee /etc/hostname
	I0401 11:25:14.767974   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401500-m02
	
	I0401 11:25:14.768258   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:17.045385   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:17.045471   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:17.045471   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:19.760122   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:19.760122   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:19.766096   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:19.766692   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:19.766806   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:25:19.914653   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
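	The script above keeps local name resolution working after the hostname change: if no /etc/hosts line already ends in the new name, it either rewrites the 127.0.1.1 entry in place or appends one. The same script, parameterized on the hostname, as a small Go helper (hostsScript is an illustrative name):

    package provision

    import "fmt"

    // hostsScript returns the shell run over SSH above, for any hostname.
    func hostsScript(name string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }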
	I0401 11:25:19.914653   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:25:19.914653   12872 buildroot.go:174] setting up certificates
	I0401 11:25:19.914653   12872 provision.go:84] configureAuth start
	I0401 11:25:19.914653   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:22.199218   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:22.199218   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:22.199515   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:24.943558   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:24.944664   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:24.944736   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:27.231270   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:27.231270   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:27.231270   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:29.910462   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:29.910533   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:29.910533   12872 provision.go:143] copyHostCerts
	I0401 11:25:29.910533   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 11:25:29.911064   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:25:29.911147   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:25:29.911679   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:25:29.912557   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 11:25:29.913507   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:25:29.913507   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:25:29.913507   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:25:29.915031   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 11:25:29.915890   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:25:29.915890   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:25:29.915890   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:25:29.917398   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-401500-m02 san=[127.0.0.1 172.19.149.50 ha-401500-m02 localhost minikube]
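	A SAN list like the one above (loopback, the node IP, the hostname, plus generic aliases) is what lets one server certificate satisfy every address the docker endpoint gets dialed by. A minimal sketch of issuing such a cert against an existing CA with Go's crypto/x509; the field choices are illustrative, not minikube's exact template:

    package provision

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a TLS server certificate signed by caCert/caKey,
    // valid for the given DNS names and IPs.
    func newServerCert(caCert *x509.Certificate, caKey crypto.Signer, org string, dns []string, ips []net.IP) (derCert []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dns,
            IPAddresses:  ips,
        }
        derCert, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return derCert, key, err
    }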
	I0401 11:25:30.020874   12872 provision.go:177] copyRemoteCerts
	I0401 11:25:30.041473   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:25:30.041473   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:32.277303   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:32.277303   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:32.277986   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:35.009020   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:35.009020   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:35.009905   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:25:35.122752   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0812441s)
	I0401 11:25:35.122752   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 11:25:35.122752   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 11:25:35.182764   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 11:25:35.183310   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 11:25:35.241412   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 11:25:35.242196   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:25:35.292811   12872 provision.go:87] duration metric: took 15.3780499s to configureAuth
	I0401 11:25:35.292876   12872 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:25:35.293462   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:25:35.293462   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:37.548794   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:37.548794   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:37.549234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:40.307749   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:40.307921   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:40.318205   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:40.319111   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:40.319111   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:25:40.444042   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:25:40.444099   12872 buildroot.go:70] root file system type: tmpfs
	I0401 11:25:40.444386   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:25:40.444386   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:42.703063   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:42.703548   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:42.703661   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:45.393155   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:45.393155   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:45.402922   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:45.402922   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:45.403706   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.153.73"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:25:45.570984   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.153.73
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 11:25:45.571051   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:47.872077   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:47.872077   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:47.872378   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:50.572850   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:50.572850   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:50.581231   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:50.582153   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:50.582153   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:25:52.814142   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0401 11:25:52.814142   12872 machine.go:97] duration metric: took 48.5271376s to provisionDockerMachine
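	The last provisioning step above is worth noting: the rendered unit is written to docker.service.new, then a single SSH command diffs it against the live unit and only on a difference moves it into place and restarts docker, which keeps repeated provisioning idempotent (here diff fails because no unit existed yet, so the move always runs). A sketch of assembling that command; unitPath and the helper name are illustrative:

    package provision

    import "fmt"

    // swapIfChangedCmd installs <unit>.new over <unit> only when they differ.
    func swapIfChangedCmd(unitPath string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
            unitPath)
    }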
	I0401 11:25:52.814142   12872 client.go:171] duration metric: took 2m1.6467552s to LocalClient.Create
	I0401 11:25:52.814142   12872 start.go:167] duration metric: took 2m1.6467552s to libmachine.API.Create "ha-401500"
	I0401 11:25:52.814142   12872 start.go:293] postStartSetup for "ha-401500-m02" (driver="hyperv")
	I0401 11:25:52.814142   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:25:52.826787   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:25:52.826787   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:55.098120   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:55.098120   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:55.098390   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:57.773663   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:57.773663   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:57.775050   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:25:57.883359   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0565359s)
	I0401 11:25:57.896080   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:25:57.903326   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:25:57.903326   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:25:57.903846   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:25:57.904815   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:25:57.904815   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 11:25:57.916785   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:25:57.936975   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:25:57.983525   12872 start.go:296] duration metric: took 5.1693458s for postStartSetup
	I0401 11:25:57.987185   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:00.206992   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:00.206992   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:00.207145   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:02.897814   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:02.897814   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:02.898907   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:26:02.901729   12872 start.go:128] duration metric: took 2m11.738322s to createHost
	I0401 11:26:02.901729   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:05.118268   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:05.118495   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:05.118495   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:07.852840   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:07.853199   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:07.859062   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:26:07.859062   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:26:07.859062   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 11:26:07.986939   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711970767.979901784
	
	I0401 11:26:07.986939   12872 fix.go:216] guest clock: 1711970767.979901784
	I0401 11:26:07.986939   12872 fix.go:229] Guest: 2024-04-01 11:26:07.979901784 +0000 UTC Remote: 2024-04-01 11:26:02.9017293 +0000 UTC m=+353.133323501 (delta=5.078172484s)
	I0401 11:26:07.986939   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:10.254232   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:10.254459   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:10.254459   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:12.988474   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:12.988474   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:12.994969   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:26:12.995906   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:26:12.995906   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711970767
	I0401 11:26:13.147265   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:26:07 UTC 2024
	
	I0401 11:26:13.147447   12872 fix.go:236] clock set: Mon Apr  1 11:26:07 UTC 2024 (err=<nil>)
	I0401 11:26:13.147447   12872 start.go:83] releasing machines lock for "ha-401500-m02", held for 2m21.984958s
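	The fix.go lines above read the guest clock as seconds.nanoseconds, compute the drift against the host's snapshot (5.08s here, accumulated while the VM booted), and reset the guest with date -s. A sketch of that check; runSSH and the two-second threshold are illustrative:

    package provision

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // syncGuestClock resets the guest clock when it drifts too far from the host.
    func syncGuestClock(runSSH func(string) (string, error)) error {
        out, err := runSSH("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(int64(secs), 0))
        if drift > 2*time.Second || drift < -2*time.Second {
            _, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }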
	I0401 11:26:13.147730   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:15.400571   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:15.400571   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:15.400832   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:18.091153   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:18.091153   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:18.094043   12872 out.go:177] * Found network options:
	I0401 11:26:18.096873   12872 out.go:177]   - NO_PROXY=172.19.153.73
	W0401 11:26:18.099078   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:26:18.101655   12872 out.go:177]   - NO_PROXY=172.19.153.73
	W0401 11:26:18.104059   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:26:18.106039   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:26:18.108567   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:26:18.108567   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:18.119016   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 11:26:18.119016   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:20.431266   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:20.431379   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:20.431266   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:20.431379   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:20.431379   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:20.431505   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:23.226507   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:23.227048   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:23.227614   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:26:23.252104   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:23.252104   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:23.252755   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:26:23.416258   12872 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2963791s)
	W0401 11:26:23.416258   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:26:23.416391   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3077864s)
	I0401 11:26:23.429709   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:26:23.464692   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 11:26:23.464772   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:26:23.464926   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:26:23.515323   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:26:23.550076   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:26:23.572889   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:26:23.586640   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:26:23.622061   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:26:23.656922   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:26:23.689028   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:26:23.727818   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:26:23.765687   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:26:23.799951   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:26:23.835474   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:26:23.868237   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:26:23.900104   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:26:23.931666   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:24.146851   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:26:24.192884   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:26:24.206348   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:26:24.246889   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:26:24.284058   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:26:24.328234   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:26:24.367282   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:26:24.404276   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:26:24.468484   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
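	The systemctl sequence above makes sure only the selected runtime keeps running: for each of containerd and crio, is-active is queried first (it exits non-zero when there is nothing to stop), then stop -f is issued and the state re-checked. The same loop as a sketch; the service list and runSSH helper are illustrative:

    package provision

    // stopOtherRuntimes stops any container runtime other than the chosen one.
    func stopOtherRuntimes(runSSH func(string) (string, error)) error {
        for _, svc := range []string{"containerd", "crio"} {
            // A non-zero exit from is-active means the unit is already inactive.
            if _, err := runSSH("sudo systemctl is-active --quiet service " + svc); err != nil {
                continue
            }
            if _, err := runSSH("sudo systemctl stop -f " + svc); err != nil {
                return err
            }
        }
        return nil
    }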
	I0401 11:26:24.495640   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:26:24.549396   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:26:24.570343   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:26:24.595364   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:26:24.641753   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:26:24.857265   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:26:25.071243   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:26:25.071243   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:26:25.120353   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:25.332761   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:26:27.923293   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899259s)
	I0401 11:26:27.935255   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 11:26:27.971850   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:26:28.013726   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 11:26:28.234460   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 11:26:28.444688   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:28.664253   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 11:26:28.708931   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:26:28.747517   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:28.968350   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 11:26:29.082405   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 11:26:29.095396   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
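	"Will wait 60s for socket path" above is again a bounded poll: stat the CRI socket until it exists or the deadline passes (here the first stat already succeeds). A sketch with illustrative names:

    package provision

    import (
        "fmt"
        "time"
    )

    // waitForSocket polls over SSH until path exists or timeout elapses.
    func waitForSocket(runSSH func(string) (string, error), path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := runSSH("stat " + path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %v", path, timeout)
    }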
	I0401 11:26:29.104391   12872 start.go:562] Will wait 60s for crictl version
	I0401 11:26:29.116369   12872 ssh_runner.go:195] Run: which crictl
	I0401 11:26:29.134931   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:26:29.219441   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 11:26:29.228704   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:26:29.273858   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:26:29.312134   12872 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 11:26:29.314717   12872 out.go:177]   - env NO_PROXY=172.19.153.73
	I0401 11:26:29.318763   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 11:26:29.325764   12872 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 11:26:29.325764   12872 ip.go:210] interface addr: 172.19.144.1/20
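	The ip.go lines above locate the host side of the Hyper-V switch by interface-name prefix, then take its first IPv4 address (172.19.144.1) for host.minikube.internal. The standard library is enough to reproduce the search; this sketch is runnable as-is:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
                continue // e.g. "Ethernet 2" and the loopback are skipped, as in the log
            }
            addrs, _ := ifc.Addrs()
            for _, a := range addrs {
                if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
                    fmt.Println(ipn.IP) // 172.19.144.1 in this run
                }
            }
        }
    }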
	I0401 11:26:29.337759   12872 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 11:26:29.343342   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:26:29.368898   12872 mustload.go:65] Loading cluster: ha-401500
	I0401 11:26:29.369082   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:26:29.370062   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:26:31.547016   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:31.547526   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:31.547526   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:26:31.548432   12872 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500 for IP: 172.19.149.50
	I0401 11:26:31.548505   12872 certs.go:194] generating shared ca certs ...
	I0401 11:26:31.548505   12872 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:26:31.549160   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 11:26:31.549499   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 11:26:31.549653   12872 certs.go:256] generating profile certs ...
	I0401 11:26:31.550257   12872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key
	I0401 11:26:31.550438   12872 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6
	I0401 11:26:31.550569   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.153.73 172.19.149.50 172.19.159.254]
	I0401 11:26:31.955806   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6 ...
	I0401 11:26:31.955806   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6: {Name:mkcf0f68864f471e42f9c64286a52246005b41fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:26:31.956879   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6 ...
	I0401 11:26:31.956879   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6: {Name:mkf6efe69cff6bca356149ae606453d01bea64f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:26:31.958097   12872 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt
	I0401 11:26:31.971020   12872 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key
	I0401 11:26:31.973590   12872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key
	I0401 11:26:31.973671   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 11:26:31.973737   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0401 11:26:31.973737   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 11:26:31.973737   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 11:26:31.974269   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 11:26:31.974402   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 11:26:31.974498   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 11:26:31.974663   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 11:26:31.974663   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 11:26:31.975240   12872 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 11:26:31.975437   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 11:26:31.975779   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 11:26:31.976042   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 11:26:31.976042   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 11:26:31.976843   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 11:26:31.976996   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem -> /usr/share/ca-certificates/1260.pem
	I0401 11:26:31.976996   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /usr/share/ca-certificates/12602.pem
	I0401 11:26:31.976996   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:31.977624   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:26:34.285235   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:34.285310   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:34.285310   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:37.089075   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:26:37.089134   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:37.089134   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:26:37.195542   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0401 11:26:37.203253   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 11:26:37.237772   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0401 11:26:37.245497   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0401 11:26:37.279217   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 11:26:37.287216   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 11:26:37.320170   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0401 11:26:37.326555   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 11:26:37.361449   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0401 11:26:37.369997   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 11:26:37.406102   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0401 11:26:37.413141   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0401 11:26:37.433969   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:26:37.489020   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:26:37.543100   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:26:37.595921   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 11:26:37.647343   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0401 11:26:37.700082   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 11:26:37.763314   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:26:37.823076   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 11:26:37.875767   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 11:26:37.930567   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 11:26:37.980320   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:26:38.030123   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 11:26:38.067802   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0401 11:26:38.103871   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 11:26:38.139269   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 11:26:38.174475   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 11:26:38.209026   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0401 11:26:38.244216   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
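The block above is the shared-certificate staging step for the joining control plane: sa.pub, sa.key, the front-proxy CA and the etcd CA are first read into memory (stat, then scp ... --> memory), and then written out to /var/lib/minikube/certs together with the cluster and profile certificates copied from the Windows host. The freshness test is a cheap stat on size and mtime before any copy. A minimal local sketch of that check-then-copy pattern (illustrative only, not minikube's ssh_runner code; paths are hypothetical):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyIfStale copies src to dst unless dst already exists with the same
    // size and modification time -- the same cheap freshness check the
    // ssh_runner performs with `stat -c "%s %y"` before deciding to scp.
    func copyIfStale(src, dst string) error {
        si, err := os.Stat(src)
        if err != nil {
            return err
        }
        if di, err := os.Stat(dst); err == nil &&
            di.Size() == si.Size() && di.ModTime().Equal(si.ModTime()) {
            return nil // up to date, skip the copy
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        if _, err := io.Copy(out, in); err != nil {
            out.Close()
            return err
        }
        if err := out.Close(); err != nil {
            return err
        }
        // Preserve src's mtime so the freshness check holds next time.
        return os.Chtimes(dst, si.ModTime(), si.ModTime())
    }

    func main() {
        // Hypothetical local paths standing in for host -> node transfers.
        if err := copyIfStale("ca.crt", "certs/ca.crt"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }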
	I0401 11:26:38.290178   12872 ssh_runner.go:195] Run: openssl version
	I0401 11:26:38.314297   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 11:26:38.348513   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 11:26:38.357217   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 11:26:38.371159   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 11:26:38.393385   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
	I0401 11:26:38.432517   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 11:26:38.467822   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 11:26:38.474975   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 11:26:38.487977   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 11:26:38.512119   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:26:38.546415   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:26:38.584732   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:38.593856   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:38.606932   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:38.630222   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
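The openssl runs above implement the standard CA trust-store convention: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash plus ".0" (b5213941.0 for minikubeCA.pem in the log), which is the name OpenSSL looks up when verifying against a CA directory. A small sketch of the same convention (assumes openssl on PATH; not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCALink creates <certsDir>/<subject-hash>.0 -> pemPath, the
    // lookup name OpenSSL uses when verifying against a CA directory.
    func installCALink(pemPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return link, os.Symlink(pemPath, link)
    }

    func main() {
        link, err := installCALink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("linked", link)
    }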
	I0401 11:26:38.667692   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:26:38.677524   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 11:26:38.678146   12872 kubeadm.go:928] updating node {m02 172.19.149.50 8443 v1.29.3 docker true true} ...
	I0401 11:26:38.678146   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-401500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.149.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
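kubeadm.go:940 above logs the rendered kubelet systemd drop-in for the joining node: the empty ExecStart= line first clears the base unit's command, then the override pins the binary version, the node name (ha-401500-m02) and the node IP (172.19.149.50). A sketch of rendering such a drop-in with text/template (the template text and field names here are illustrative, not minikube's own template):

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        // Values taken from the log: joining node m02 of ha-401500.
        _ = t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.29.3",
            "NodeName":          "ha-401500-m02",
            "NodeIP":            "172.19.149.50",
        })
    }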
	I0401 11:26:38.678146   12872 kube-vip.go:111] generating kube-vip config ...
	I0401 11:26:38.691285   12872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 11:26:38.717655   12872 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 11:26:38.717655   12872 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
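kube-vip is generated here as a static pod manifest: the VIP 172.19.159.254 is claimed via ARP by whichever control plane holds the plndr-cp-lock lease, and lb_enable additionally load-balances API-server traffic on port 8443 across the control planes. Because it is a static pod, deploying it is just a file write into /etc/kubernetes/manifests (the scp to kube-vip.yaml a few lines below does exactly that); the kubelet watches the directory and starts the pod without any API server involved. Minimal sketch:

    package main

    import "os"

    func main() {
        // manifest would hold the kube-vip Pod YAML generated above; elided here.
        manifest := []byte("# kube-vip Pod YAML as logged above\n")
        // The kubelet watches /etc/kubernetes/manifests and runs any Pod
        // manifest placed there as a static pod.
        if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o644); err != nil {
            panic(err)
        }
    }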
	I0401 11:26:38.731406   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 11:26:38.750144   12872 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 11:26:38.765991   12872 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 11:26:38.790074   12872 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet
	I0401 11:26:38.790319   12872 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl
	I0401 11:26:38.790463   12872 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm
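download.go fetches kubelet, kubectl and kubeadm in parallel, each pinned by the checksum=file:...sha256 fragment: the binary is accepted only if its SHA-256 matches the published digest. A self-contained sketch of the same fetch-and-verify step (the dl.k8s.io URLs are the real ones from the log; the rest is illustrative):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm"
        bin, err := fetch(url)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(url + ".sha256")
        if err != nil {
            panic(err)
        }
        // Take the leading hex digest from the .sha256 file.
        want := strings.Fields(string(sum))[0]
        h := sha256.Sum256(bin)
        if got := hex.EncodeToString(h[:]); got != want {
            fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
            os.Exit(1)
        }
        fmt.Println("kubeadm verified,", len(bin), "bytes")
    }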
	I0401 11:26:39.766323   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:26:39.778138   12872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:26:39.780131   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:26:39.789436   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 11:26:39.790413   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0401 11:26:39.798402   12872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:26:39.869438   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 11:26:39.869438   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 11:26:40.376909   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:26:40.422914   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:26:40.434500   12872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:26:40.463499   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 11:26:40.463499   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0401 11:26:41.216486   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 11:26:41.242631   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 11:26:41.285210   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:26:41.320128   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 11:26:41.367442   12872 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0401 11:26:41.375281   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
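The one-liner above keeps /etc/hosts idempotent: it filters out any existing control-plane.minikube.internal line, appends the current VIP mapping, and swaps the file back into place through a temp file. The same update expressed in Go (sketch only; the real flow runs the shell version over SSH with sudo):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry removes any line ending in "\t<host>" and appends
    // "<ip>\t<host>", writing through a temp file like the shell version.
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        keep := lines[:0]
        for _, l := range lines {
            if !strings.HasSuffix(l, "\t"+host) {
                keep = append(keep, l)
            }
        }
        keep = append(keep, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // swap into place, like `cp /tmp/h.$$ /etc/hosts`
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "172.19.159.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }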
	I0401 11:26:41.413219   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:41.641911   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:26:41.674467   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:26:41.675295   12872 start.go:316] joinCluster: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:26:41.675531   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 11:26:41.675643   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:26:43.930948   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:43.930948   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:43.931067   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:46.675502   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:26:46.675590   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:46.676111   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:26:46.910230   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2346615s)
	I0401 11:26:46.910304   12872 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:26:46.910410   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lfqzir.q7dxua6s02mjgst6 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m02 --control-plane --apiserver-advertise-address=172.19.149.50 --apiserver-bind-port=8443"
	I0401 11:27:35.007302   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lfqzir.q7dxua6s02mjgst6 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m02 --control-plane --apiserver-advertise-address=172.19.149.50 --apiserver-bind-port=8443": (48.0965016s)
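The join command carries two credentials: a short-lived bootstrap token and --discovery-token-ca-cert-hash, which pins the cluster CA so the joining node can verify it is talking to the right control plane before trusting anything else. That hash is kubeadm's "pubkeypin": the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. It can be recomputed from ca.crt like this (a sketch; the path is the one from the log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's "pubkeypin": SHA-256 over the DER SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }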
	I0401 11:27:35.007482   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 11:27:36.031458   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0229296s)
	I0401 11:27:36.048246   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-401500-m02 minikube.k8s.io/updated_at=2024_04_01T11_27_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=ha-401500 minikube.k8s.io/primary=false
	I0401 11:27:36.251226   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-401500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0401 11:27:36.419429   12872 start.go:318] duration metric: took 54.7438095s to joinCluster
	I0401 11:27:36.419869   12872 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:27:36.424647   12872 out.go:177] * Verifying Kubernetes components...
	I0401 11:27:36.420742   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:27:36.440084   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:27:36.890925   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:27:36.938898   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:27:36.939462   12872 kapi.go:59] client config for ha-401500: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 11:27:36.939770   12872 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.153.73:8443
	I0401 11:27:36.940555   12872 node_ready.go:35] waiting up to 6m0s for node "ha-401500-m02" to be "Ready" ...
	I0401 11:27:36.940755   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:36.940755   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:36.940755   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:36.940755   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:36.962308   12872 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0401 11:27:37.442668   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:37.442668   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:37.442668   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:37.442668   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:37.449266   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:37.952975   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:37.953010   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:37.953065   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:37.953065   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:37.959780   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:38.445133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:38.445133   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:38.445133   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:38.445133   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:38.450629   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:38.950450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:38.950704   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:38.950704   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:38.950704   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:38.954828   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:38.956527   12872 node_ready.go:53] node "ha-401500-m02" has status "Ready":"False"
	I0401 11:27:39.442450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:39.442512   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:39.442512   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:39.442512   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:39.451276   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:39.944649   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:39.944712   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:39.944746   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:39.944746   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:39.949086   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:40.452147   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:40.452254   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:40.452254   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:40.452254   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:40.462365   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:27:40.942213   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:40.942213   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:40.942302   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:40.942322   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:41.086208   12872 round_trippers.go:574] Response Status: 200 OK in 143 milliseconds
	I0401 11:27:41.087261   12872 node_ready.go:53] node "ha-401500-m02" has status "Ready":"False"
	I0401 11:27:41.448193   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:41.448193   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:41.448193   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:41.448193   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:41.452823   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:41.952994   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:41.952994   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:41.952994   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:41.952994   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:41.957756   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:42.444599   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:42.444599   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:42.444599   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:42.444599   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:42.459339   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:27:42.950589   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:42.950589   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:42.950670   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:42.950670   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:42.956599   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:43.454422   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:43.454508   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:43.454508   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:43.454508   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:43.459854   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:43.461387   12872 node_ready.go:53] node "ha-401500-m02" has status "Ready":"False"
	I0401 11:27:43.944550   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:43.944550   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:43.944792   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:43.944792   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:43.953170   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:44.444380   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:44.444522   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.444522   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.444522   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.449834   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:44.451252   12872 node_ready.go:49] node "ha-401500-m02" has status "Ready":"True"
	I0401 11:27:44.451252   12872 node_ready.go:38] duration metric: took 7.5104442s for node "ha-401500-m02" to be "Ready" ...
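node_ready.go polls GET /api/v1/nodes/ha-401500-m02 roughly twice a second until the NodeReady condition reports True, which took 7.5s here. The equivalent check with client-go would look roughly like this (a sketch, not minikube's implementation; the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's NodeReady condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            if ok, err := nodeReady(cs, "ha-401500-m02"); err == nil && ok {
                fmt.Println("Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }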
	I0401 11:27:44.451388   12872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:27:44.451530   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:44.451530   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.451530   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.451530   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.465553   12872 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0401 11:27:44.474721   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.474721   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4xvlf
	I0401 11:27:44.474721   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.474721   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.474721   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.480922   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:44.481478   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:44.482084   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.482084   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.482084   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.486298   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:44.486849   12872 pod_ready.go:92] pod "coredns-76f75df574-4xvlf" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:44.486849   12872 pod_ready.go:81] duration metric: took 12.1271ms for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.486849   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.487386   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vjslq
	I0401 11:27:44.487386   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.487386   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.487449   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.491143   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:27:44.492219   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:44.492295   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.492295   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.492295   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.497127   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:44.498178   12872 pod_ready.go:92] pod "coredns-76f75df574-vjslq" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:44.498178   12872 pod_ready.go:81] duration metric: took 11.3293ms for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.498178   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.498178   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500
	I0401 11:27:44.498178   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.498178   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.498178   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.502774   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:44.503766   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:44.503766   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.503766   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.503766   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.509771   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:44.510856   12872 pod_ready.go:92] pod "etcd-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:44.510856   12872 pod_ready.go:81] duration metric: took 12.6778ms for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.510856   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.510856   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:44.510856   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.510856   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.510856   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.514665   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:27:44.515984   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:44.515984   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.515984   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.515984   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.520576   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:45.023068   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:45.023068   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.023068   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.023068   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.027637   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:45.028972   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:45.028972   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.028972   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.028972   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.033457   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:45.519479   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:45.519479   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.519479   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.519797   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.527332   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:27:45.527602   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:45.527602   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.528183   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.528183   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.532214   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:46.022058   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:46.022058   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.022058   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.022058   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.027868   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.029105   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.029182   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.029277   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.029277   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.037606   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:46.524102   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:46.524279   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.524279   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.524279   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.530266   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.533126   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.533206   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.533206   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.533292   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.552770   12872 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0401 11:27:46.554251   12872 pod_ready.go:92] pod "etcd-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:46.554339   12872 pod_ready.go:81] duration metric: took 2.0434691s for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.554446   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.554582   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500
	I0401 11:27:46.554582   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.554582   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.554582   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.562948   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:46.564016   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:46.564558   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.564558   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.564612   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.569625   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.570609   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:46.570609   12872 pod_ready.go:81] duration metric: took 16.1629ms for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.570609   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.648210   12872 request.go:629] Waited for 76.9405ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:27:46.648210   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:27:46.648210   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.648210   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.648210   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.653786   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.851473   12872 request.go:629] Waited for 195.9621ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.851817   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.851817   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.851817   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.851817   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.857564   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:47.086526   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:27:47.086650   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.086650   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.086650   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.092544   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:47.259722   12872 request.go:629] Waited for 166.2169ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:47.259722   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:47.259722   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.259722   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.259722   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.265543   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:47.267070   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:47.267233   12872 pod_ready.go:81] duration metric: took 696.6198ms for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
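The recurring "Waited ... due to client-side throttling" messages come from client-go's token-bucket rate limiter, not from server-side API priority and fairness: the rest.Config logged earlier has QPS:0 and Burst:0, so the client falls back to the defaults of 5 requests/s with a burst of 10, and the paired pod+node GETs start queueing. When the limiter matters, it can be raised on the config before building the clientset (values below are illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // steady-state requests per second
        cfg.Burst = 100 // burst size before requests start queueing
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = cs // use as usual; requests are now throttled at the higher limits
    }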
	I0401 11:27:47.267233   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:47.446131   12872 request.go:629] Waited for 178.7861ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500
	I0401 11:27:47.446475   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500
	I0401 11:27:47.446475   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.446475   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.446475   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.452527   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:47.648710   12872 request.go:629] Waited for 194.5476ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:47.648710   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:47.648710   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.648710   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.648710   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.654327   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:47.656224   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:47.656358   12872 pod_ready.go:81] duration metric: took 389.1221ms for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:47.656358   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:47.852951   12872 request.go:629] Waited for 196.5913ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m02
	I0401 11:27:47.852951   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m02
	I0401 11:27:47.852951   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.852951   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.852951   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.858663   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:48.057777   12872 request.go:629] Waited for 197.3655ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.058010   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.058010   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.058081   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.058081   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.064653   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:48.065237   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:48.065295   12872 pod_ready.go:81] duration metric: took 408.9341ms for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.065295   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.245186   12872 request.go:629] Waited for 179.7853ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zds
	I0401 11:27:48.245186   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zds
	I0401 11:27:48.245186   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.245186   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.245186   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.250210   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:48.450759   12872 request.go:629] Waited for 198.752ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.450860   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.450860   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.450860   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.450949   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.456305   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:48.457207   12872 pod_ready.go:92] pod "kube-proxy-28zds" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:48.457276   12872 pod_ready.go:81] duration metric: took 391.9779ms for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.457331   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.656255   12872 request.go:629] Waited for 198.8543ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqcpv
	I0401 11:27:48.656255   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqcpv
	I0401 11:27:48.656255   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.656255   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.656255   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.662824   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:48.844811   12872 request.go:629] Waited for 179.7143ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:48.845046   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:48.845046   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.845128   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.845128   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.851746   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:48.852626   12872 pod_ready.go:92] pod "kube-proxy-hqcpv" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:48.852661   12872 pod_ready.go:81] duration metric: took 395.2927ms for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.852687   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.046849   12872 request.go:629] Waited for 194.1607ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500
	I0401 11:27:49.047019   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500
	I0401 11:27:49.047019   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.047019   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.047019   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.053647   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:49.249201   12872 request.go:629] Waited for 193.3384ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:49.249201   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:49.249476   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.249476   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.249476   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.255856   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:49.257030   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:49.257030   12872 pod_ready.go:81] duration metric: took 404.3396ms for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.257030   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.452619   12872 request.go:629] Waited for 195.4041ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:27:49.452723   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:27:49.452723   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.452723   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.452723   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.460845   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:49.656358   12872 request.go:629] Waited for 194.4507ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:49.656358   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:49.656755   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.656755   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.656755   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.663121   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:49.664267   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:49.664429   12872 pod_ready.go:81] duration metric: took 407.3962ms for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.664429   12872 pod_ready.go:38] duration metric: took 5.2130044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:27:49.664544   12872 api_server.go:52] waiting for apiserver process to appear ...
	I0401 11:27:49.678600   12872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 11:27:49.707910   12872 api_server.go:72] duration metric: took 13.2878831s to wait for apiserver process to appear ...
	I0401 11:27:49.708743   12872 api_server.go:88] waiting for apiserver healthz status ...
	I0401 11:27:49.708743   12872 api_server.go:253] Checking apiserver healthz at https://172.19.153.73:8443/healthz ...
	I0401 11:27:49.716486   12872 api_server.go:279] https://172.19.153.73:8443/healthz returned 200:
	ok
	I0401 11:27:49.716754   12872 round_trippers.go:463] GET https://172.19.153.73:8443/version
	I0401 11:27:49.716771   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.716820   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.716838   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.718603   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 11:27:49.718734   12872 api_server.go:141] control plane version: v1.29.3
	I0401 11:27:49.718853   12872 api_server.go:131] duration metric: took 10.1105ms to wait for apiserver health ...
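The healthz probe above is a plain HTTPS GET that expects a 200 response with the literal body "ok", followed by a GET of /version to read the control-plane version. A minimal Go sketch of the same check, using the endpoint from the log; skipping TLS verification is an illustration shortcut to keep the sketch self-contained, not what the real client does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Assumption: endpoint taken from the log above; InsecureSkipVerify is
		// only here so the sketch runs without the cluster certificates.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://172.19.153.73:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok".
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}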
	I0401 11:27:49.718853   12872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 11:27:49.858508   12872 request.go:629] Waited for 139.6231ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:49.858611   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:49.858611   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.858729   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.858729   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.867821   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:27:49.875964   12872 system_pods.go:59] 17 kube-system pods found
	I0401 11:27:49.875964   12872 system_pods.go:61] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:27:49.875964   12872 system_pods.go:74] duration metric: took 157.0791ms to wait for pod list to return data ...
	I0401 11:27:49.875964   12872 default_sa.go:34] waiting for default service account to be created ...
	I0401 11:27:50.060269   12872 request.go:629] Waited for 184.3038ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/default/serviceaccounts
	I0401 11:27:50.060269   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/default/serviceaccounts
	I0401 11:27:50.060269   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:50.060269   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:50.060269   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:50.064649   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:50.065492   12872 default_sa.go:45] found service account: "default"
	I0401 11:27:50.065492   12872 default_sa.go:55] duration metric: took 189.5266ms for default service account to be created ...
	I0401 11:27:50.065492   12872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 11:27:50.248980   12872 request.go:629] Waited for 183.2936ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:50.249060   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:50.249060   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:50.249060   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:50.249060   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:50.258369   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:27:50.266999   12872 system_pods.go:86] 17 kube-system pods found
	I0401 11:27:50.266999   12872 system_pods.go:89] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:27:50.266999   12872 system_pods.go:126] duration metric: took 201.5058ms to wait for k8s-apps to be running ...
	I0401 11:27:50.267527   12872 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 11:27:50.278810   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:27:50.308270   12872 system_svc.go:56] duration metric: took 40.7423ms (WaitForService) to wait for kubelet
	I0401 11:27:50.308360   12872 kubeadm.go:576] duration metric: took 13.8883284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:27:50.308360   12872 node_conditions.go:102] verifying NodePressure condition ...
	I0401 11:27:50.454420   12872 request.go:629] Waited for 145.8884ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes
	I0401 11:27:50.454790   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes
	I0401 11:27:50.454790   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:50.454790   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:50.454867   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:50.459106   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:50.461264   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:27:50.461353   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:27:50.461353   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:27:50.461353   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:27:50.461401   12872 node_conditions.go:105] duration metric: took 152.9923ms to run NodePressure ...
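The NodePressure verification above lists /api/v1/nodes and reads each node's ephemeral-storage and CPU capacity. A sketch of the same readout using client-go; sourcing the kubeconfig from the KUBECONFIG environment variable is an assumption for illustration:

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a map of resource name -> quantity on each node's status.
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
	}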
	I0401 11:27:50.461401   12872 start.go:240] waiting for startup goroutines ...
	I0401 11:27:50.461447   12872 start.go:254] writing updated cluster config ...
	I0401 11:27:50.464624   12872 out.go:177] 
	I0401 11:27:50.479371   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:27:50.479371   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:27:50.486271   12872 out.go:177] * Starting "ha-401500-m03" control-plane node in "ha-401500" cluster
	I0401 11:27:50.489441   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:27:50.489441   12872 cache.go:56] Caching tarball of preloaded images
	I0401 11:27:50.490136   12872 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:27:50.490314   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:27:50.490480   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:27:50.495394   12872 start.go:360] acquireMachinesLock for ha-401500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:27:50.495394   12872 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-401500-m03"
	I0401 11:27:50.496054   12872 start.go:93] Provisioning new machine with config: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:27:50.496089   12872 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0401 11:27:50.497946   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 11:27:50.498892   12872 start.go:159] libmachine.API.Create for "ha-401500" (driver="hyperv")
	I0401 11:27:50.498892   12872 client.go:168] LocalClient.Create starting
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 11:27:50.499900   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:27:50.499900   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:27:50.499900   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 11:27:52.547405   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 11:27:52.548425   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:27:52.548533   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 11:27:54.399368   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 11:27:54.399368   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:27:54.400111   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:27:56.009894   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:27:56.010495   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:27:56.010573   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:27:59.964174   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:27:59.967124   12872 main.go:141] libmachine: [stderr =====>] : 
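Every Hyper-V interaction in this log is a powershell.exe invocation whose stdout is parsed, here as JSON produced by ConvertTo-Json. A sketch of that shell-out-and-decode pattern, with struct fields mirroring the JSON above and error handling trimmed to panics:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// vmSwitch mirrors the fields selected in the PowerShell query above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		if err != nil {
			panic(err)
		}
		// Wrapping in @(...) guarantees a JSON array even for a single switch.
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}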
	I0401 11:27:59.968808   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 11:28:00.481020   12872 main.go:141] libmachine: Creating SSH key...
	I0401 11:28:00.705339   12872 main.go:141] libmachine: Creating VM...
	I0401 11:28:00.705339   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:28:03.773115   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:28:03.773191   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:03.773294   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0401 11:28:03.773362   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:28:05.656597   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:28:05.656597   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:05.657417   12872 main.go:141] libmachine: Creating VHD
	I0401 11:28:05.657417   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 11:28:09.602344   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 59C07E18-6B93-4D43-AE0D-B8080CD51ED7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 11:28:09.603388   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:09.603388   12872 main.go:141] libmachine: Writing magic tar header
	I0401 11:28:09.603388   12872 main.go:141] libmachine: Writing SSH key tar header
	I0401 11:28:09.612931   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 11:28:12.924598   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:12.925654   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:12.925654   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\disk.vhd' -SizeBytes 20000MB
	I0401 11:28:15.564225   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:15.564490   12872 main.go:141] libmachine: [stderr =====>] : 
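The "Writing magic tar header" / "Writing SSH key tar header" lines above are the trick behind the tiny 10MB fixed VHD: a tar stream carrying the SSH key is written into the VHD's data area before the disk is converted to dynamic and resized, and on first boot the guest finds that tar and installs the key. A hedged sketch of the tar-writing half, under the assumption that the public key lives next to the id_rsa path shown later in this log; the real driver's offset handling is elided:

	package main

	import (
		"archive/tar"
		"os"
	)

	// writeKeyTar overwrites the start of the fixed VHD's data area with a
	// small tar stream carrying the SSH public key.
	func writeKeyTar(vhdPath string, pubKey []byte) error {
		f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0o700}); err != nil {
			return err
		}
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(pubKey))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(pubKey); err != nil {
			return err
		}
		return tw.Close()
	}

	func main() {
		// Assumption: ".pub" sibling of the machine's id_rsa key.
		key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa.pub`)
		if err != nil {
			panic(err)
		}
		if err := writeKeyTar(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\fixed.vhd`, key); err != nil {
			panic(err)
		}
	}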
	I0401 11:28:15.564490   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-401500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0401 11:28:19.404527   12872 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-401500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 11:28:19.404527   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:19.405133   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-401500-m03 -DynamicMemoryEnabled $false
	I0401 11:28:21.797156   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:21.797156   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:21.797276   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-401500-m03 -Count 2
	I0401 11:28:24.099764   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:24.099764   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:24.099983   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-401500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\boot2docker.iso'
	I0401 11:28:26.857443   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:26.857443   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:26.857580   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-401500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\disk.vhd'
	I0401 11:28:29.666017   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:29.666229   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:29.666229   12872 main.go:141] libmachine: Starting VM...
	I0401 11:28:29.666309   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-401500-m03
	I0401 11:28:32.861287   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:32.861287   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:32.861287   12872 main.go:141] libmachine: Waiting for host to start...
	I0401 11:28:32.861287   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:35.283506   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:35.283506   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:35.284314   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:38.001661   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:38.001661   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:39.007282   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:41.377907   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:41.377907   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:41.378002   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:44.059304   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:44.060227   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:45.074063   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:47.418783   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:47.418783   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:47.418783   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:50.141252   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:50.141308   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:51.151912   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:53.476765   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:53.476765   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:53.476765   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:56.144640   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:56.144790   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:57.149948   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:59.497068   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:59.497169   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:59.497169   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:02.257353   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:02.257353   12872 main.go:141] libmachine: [stderr =====>] : 
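From Start-VM at 11:28:32 until 11:29:02 the adapter query returns empty stdout: the "Waiting for host to start..." phase simply re-polls the VM state and (networkadapters[0]).ipaddresses[0] with a short sleep until DHCP assigns an address. A sketch of that retry loop; the attempt cap and sleep interval are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// getVMIP wraps the PowerShell query shown in the log.
	func getVMIP(name string) (string, error) {
		script := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		for attempt := 1; attempt <= 60; attempt++ {
			ip, err := getVMIP("ha-401500-m03")
			if err == nil && ip != "" {
				fmt.Println("VM is up at", ip)
				return
			}
			time.Sleep(time.Second) // DHCP can take a while after Start-VM
		}
		fmt.Println("timed out waiting for an address")
	}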
	I0401 11:29:02.257353   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:04.503678   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:04.503678   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:04.504240   12872 machine.go:94] provisionDockerMachine start ...
	I0401 11:29:04.504380   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:06.845449   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:06.845449   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:06.845449   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:09.620167   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:09.620167   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:09.627761   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:09.627761   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:09.627761   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:29:09.749049   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 11:29:09.749049   12872 buildroot.go:166] provisioning hostname "ha-401500-m03"
	I0401 11:29:09.749049   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:12.053314   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:12.053405   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:12.053498   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:14.777512   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:14.777745   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:14.783706   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:14.784589   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:14.784589   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401500-m03 && echo "ha-401500-m03" | sudo tee /etc/hostname
	I0401 11:29:14.935641   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401500-m03
	
	I0401 11:29:14.936194   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:17.215469   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:17.215663   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:17.215754   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:19.961506   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:19.961506   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:19.967928   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:19.968662   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:19.970766   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:29:20.119063   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
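All of the provisioning commands here run over SSH as the "docker" user, authenticated with the per-machine id_rsa key (its path appears in the sshutil lines below). A sketch of one such session using golang.org/x/crypto/ssh; ignoring host keys is a shortcut that is only defensible for a throwaway test VM:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa`)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		}
		client, err := ssh.Dial("tcp", "172.19.145.208:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// One command per session, mirroring ssh_runner's usage.
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("out=%s err=%v\n", out, err)
	}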
	I0401 11:29:20.119640   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:29:20.119640   12872 buildroot.go:174] setting up certificates
	I0401 11:29:20.119709   12872 provision.go:84] configureAuth start
	I0401 11:29:20.119779   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:22.397848   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:22.398077   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:22.398142   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:25.142348   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:25.142348   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:25.143234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:27.481424   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:27.481881   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:27.482114   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:30.257381   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:30.257381   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:30.257457   12872 provision.go:143] copyHostCerts
	I0401 11:29:30.257634   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 11:29:30.257830   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:29:30.257830   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:29:30.257984   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:29:30.259695   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 11:29:30.259751   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:29:30.259751   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:29:30.260302   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:29:30.261251   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 11:29:30.261251   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:29:30.261251   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:29:30.261848   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:29:30.262994   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-401500-m03 san=[127.0.0.1 172.19.145.208 ha-401500-m03 localhost minikube]
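configureAuth issues a server certificate whose SANs cover the loopback address, the new node IP, and the machine's hostnames, so Docker's TLS endpoint verifies no matter which name reaches it. A sketch of those SAN fields with crypto/x509; it self-signs for brevity, whereas the log shows the real certificate being signed with the ca.pem/ca-key.pem pair:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-401500-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			// SANs matching the san=[...] list in the log line above.
			DNSNames:    []string{"ha-401500-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.145.208")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}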
	I0401 11:29:30.435832   12872 provision.go:177] copyRemoteCerts
	I0401 11:29:30.446823   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:29:30.446823   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:32.770358   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:32.771357   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:32.771465   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:35.537563   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:35.537563   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:35.538822   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:29:35.655030   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.208139s)
	I0401 11:29:35.655030   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 11:29:35.655030   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:29:35.718584   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 11:29:35.718584   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 11:29:35.781610   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 11:29:35.783291   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 11:29:35.837915   12872 provision.go:87] duration metric: took 15.7180964s to configureAuth
	I0401 11:29:35.837915   12872 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:29:35.838636   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:29:35.838636   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:38.150526   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:38.150526   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:38.150526   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:40.888275   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:40.889287   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:40.898375   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:40.898375   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:40.898375   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:29:41.032689   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:29:41.032689   12872 buildroot.go:70] root file system type: tmpfs
	I0401 11:29:41.032909   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:29:41.033014   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:43.337930   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:43.337930   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:43.338416   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:46.101816   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:46.101816   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:46.108345   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:46.108345   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:46.108939   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.153.73"
	Environment="NO_PROXY=172.19.153.73,172.19.149.50"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:29:46.271025   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.153.73
	Environment=NO_PROXY=172.19.153.73,172.19.149.50
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 11:29:46.271362   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:48.553586   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:48.553586   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:48.553586   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:51.307547   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:51.307862   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:51.313085   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:51.313834   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:51.314064   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:29:53.537223   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0401 11:29:53.537403   12872 machine.go:97] duration metric: took 49.0328192s to provisionDockerMachine
	I0401 11:29:53.537460   12872 client.go:171] duration metric: took 2m3.0377063s to LocalClient.Create
	I0401 11:29:53.537460   12872 start.go:167] duration metric: took 2m3.0377063s to libmachine.API.Create "ha-401500"
	I0401 11:29:53.537522   12872 start.go:293] postStartSetup for "ha-401500-m03" (driver="hyperv")
	I0401 11:29:53.537584   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:29:53.551431   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:29:53.551431   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:55.824297   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:55.824297   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:55.824714   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:58.563462   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:58.563462   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:58.563462   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:29:58.675603   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1241359s)
	I0401 11:29:58.688880   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:29:58.699081   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:29:58.699187   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:29:58.699764   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:29:58.700887   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:29:58.700887   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 11:29:58.713444   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:29:58.735113   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:29:58.783152   12872 start.go:296] duration metric: took 5.2455936s for postStartSetup
	I0401 11:29:58.786301   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:01.073341   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:01.073341   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:01.073341   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:03.841016   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:03.841268   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:03.841776   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:30:03.846438   12872 start.go:128] duration metric: took 2m13.349416s to createHost
	I0401 11:30:03.846561   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:06.163967   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:06.163967   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:06.163967   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:08.937930   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:08.937930   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:08.943770   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:30:08.945251   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:30:08.945251   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 11:30:09.075366   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711971009.075846302
	
	I0401 11:30:09.075366   12872 fix.go:216] guest clock: 1711971009.075846302
	I0401 11:30:09.075366   12872 fix.go:229] Guest: 2024-04-01 11:30:09.075846302 +0000 UTC Remote: 2024-04-01 11:30:03.8465619 +0000 UTC m=+594.076466301 (delta=5.229284402s)
	I0401 11:30:09.075366   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:11.360821   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:11.360919   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:11.360919   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:14.100935   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:14.100935   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:14.107770   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:30:14.108003   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:30:14.108003   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711971009
	I0401 11:30:14.246088   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:30:09 UTC 2024
	
	I0401 11:30:14.246088   12872 fix.go:236] clock set: Mon Apr  1 11:30:09 UTC 2024
	 (err=<nil>)
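The clock fix compares the guest's date +%s.%N output against the host clock and, because the ~5.2s delta exceeds tolerance, pushes the host epoch into the guest with date -s. Reproducing the delta arithmetic from the log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values from the log above: the guest reported 1711971009.075846302,
		// while the host clock at the check read 11:30:03.8465619 UTC.
		guest := time.Unix(1711971009, 75846302)
		host := time.Date(2024, 4, 1, 11, 30, 3, 846561900, time.UTC)
		delta := guest.Sub(host)
		fmt.Println("delta:", delta) // ~5.229s, hence `sudo date -s @1711971009`
	}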
	I0401 11:30:14.246088   12872 start.go:83] releasing machines lock for "ha-401500-m03", held for 2m23.749688s
	I0401 11:30:14.246088   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:16.517422   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:16.517924   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:16.518081   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:19.304823   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:19.304823   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:19.307598   12872 out.go:177] * Found network options:
	I0401 11:30:19.310427   12872 out.go:177]   - NO_PROXY=172.19.153.73,172.19.149.50
	W0401 11:30:19.312688   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.312688   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:30:19.315258   12872 out.go:177]   - NO_PROXY=172.19.153.73,172.19.149.50
	W0401 11:30:19.317923   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.317972   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.318630   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.319429   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:30:19.321879   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:30:19.321879   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:19.334359   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 11:30:19.334359   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:21.689774   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:21.689774   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:21.689774   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:21.691847   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:21.691847   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:21.691847   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:24.523877   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:24.523877   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:24.524539   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:30:24.580027   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:24.580234   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:24.580234   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:30:24.614348   12872 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.279952s)
	W0401 11:30:24.614348   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:30:24.628484   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:30:24.744975   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 11:30:24.744975   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:30:24.744975   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4230581s)
	I0401 11:30:24.745281   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:30:24.798178   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:30:24.830185   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:30:24.850893   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:30:24.863487   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:30:24.897888   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:30:24.930762   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:30:24.964808   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:30:25.002124   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:30:25.035549   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:30:25.071946   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:30:25.110769   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:30:25.147030   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:30:25.180584   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:30:25.211723   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:25.435863   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
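
The run of sed edits above rewrites /etc/containerd/config.toml in place: it flips the cgroup driver to cgroupfs (SystemdCgroup = false), normalizes the runc runtime to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports before restarting containerd. A hedged Go sketch of just the SystemdCgroup toggle, applied to an in-memory excerpt rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Excerpt of a containerd config.toml; on the guest the full file
	// lives at /etc/containerd/config.toml.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`

	// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
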
	I0401 11:30:25.472379   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:30:25.484501   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:30:25.527599   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:30:25.565685   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:30:25.612698   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:30:25.655318   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:30:25.696324   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:30:25.761459   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:30:25.788917   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:30:25.841605   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:30:25.861586   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:30:25.882565   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:30:25.935639   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:30:26.176613   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:30:26.382512   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:30:26.382512   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:30:26.429264   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:26.662808   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:30:29.258871   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.595968s)
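
The docker.go step above pushes a 130-byte daemon.json to the guest to force the cgroupfs driver before restarting dockerd. The exact payload is not printed in the log, so the fields in this sketch are an assumption about what such a daemon.json plausibly contains; only the cgroup-driver setting is implied by the log line:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical /etc/docker/daemon.json matching "configuring docker
	// to use cgroupfs"; field set beyond exec-opts is assumed.
	daemon := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	b, _ := json.MarshalIndent(daemon, "", "  ")
	fmt.Println(string(b))
}
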
	I0401 11:30:29.270516   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 11:30:29.311357   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:30:29.353931   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 11:30:29.583785   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 11:30:29.798023   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:30.021194   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 11:30:30.065959   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:30:30.106615   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:30.329857   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 11:30:30.447915   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 11:30:30.460899   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 11:30:30.471316   12872 start.go:562] Will wait 60s for crictl version
	I0401 11:30:30.484141   12872 ssh_runner.go:195] Run: which crictl
	I0401 11:30:30.504876   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:30:30.582381   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 11:30:30.595632   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:30:30.644570   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:30:30.683526   12872 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 11:30:30.686207   12872 out.go:177]   - env NO_PROXY=172.19.153.73
	I0401 11:30:30.689225   12872 out.go:177]   - env NO_PROXY=172.19.153.73,172.19.149.50
	I0401 11:30:30.693259   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 11:30:30.697916   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 11:30:30.698059   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 11:30:30.698059   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 11:30:30.698120   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 11:30:30.701194   12872 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 11:30:30.701286   12872 ip.go:210] interface addr: 172.19.144.1/20
	I0401 11:30:30.715311   12872 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 11:30:30.722311   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
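
The /etc/hosts update above is an upsert: a grep -v drops any stale line ending in a tab plus "host.minikube.internal", an echo appends the fresh mapping, and the result is copied back over /etc/hosts via a temp file. The same logic in Go (a sketch; minikube itself runs the shell pipeline shown in the log):

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any existing line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" entry, mirroring the grep -v/echo pipeline above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.19.144.9\thost.minikube.internal"
	fmt.Print(upsertHost(hosts, "172.19.144.1", "host.minikube.internal"))
}
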
	I0401 11:30:30.749517   12872 mustload.go:65] Loading cluster: ha-401500
	I0401 11:30:30.750236   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:30:30.750296   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:30:33.029342   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:33.029342   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:33.029531   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:30:33.030406   12872 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500 for IP: 172.19.145.208
	I0401 11:30:33.030406   12872 certs.go:194] generating shared ca certs ...
	I0401 11:30:33.030406   12872 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:30:33.030782   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 11:30:33.031326   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 11:30:33.031749   12872 certs.go:256] generating profile certs ...
	I0401 11:30:33.032475   12872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key
	I0401 11:30:33.032475   12872 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3
	I0401 11:30:33.032805   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.153.73 172.19.149.50 172.19.145.208 172.19.159.254]
	I0401 11:30:33.276382   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3 ...
	I0401 11:30:33.276382   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3: {Name:mk8c1cd265a28e5c2f46bc1d0572e38b2720cd15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:30:33.277831   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3 ...
	I0401 11:30:33.277831   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3: {Name:mk872163206b05ddb67d4c6d7376093c276d23b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:30:33.278492   12872 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt
	I0401 11:30:33.291161   12872 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key
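
The apiserver profile cert generated above carries every address a client might dial: the in-cluster service IPs (10.96.0.1, 10.0.0.1), localhost, all three control-plane node IPs, and the kube-vip VIP 172.19.159.254. A sketch of the corresponding x509 template; the SAN list and the 26280h lifetime come from this log, while the key-usage choices are assumptions:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// IP SANs exactly as logged for apiserver.crt.5dc6fcf3.
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"172.19.153.73", "172.19.149.50", "172.19.145.208", "172.19.159.254"} {
		ips = append(ips, net.ParseIP(s))
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	fmt.Println("apiserver cert SANs:", tmpl.IPAddresses)
}
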
	I0401 11:30:33.293324   12872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key
	I0401 11:30:33.293385   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 11:30:33.293718   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0401 11:30:33.294067   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 11:30:33.294218   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 11:30:33.294402   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 11:30:33.294596   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 11:30:33.294799   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 11:30:33.294799   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 11:30:33.295427   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 11:30:33.295718   12872 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 11:30:33.295912   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 11:30:33.296250   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 11:30:33.296573   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 11:30:33.296675   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 11:30:33.297207   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 11:30:33.297265   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /usr/share/ca-certificates/12602.pem
	I0401 11:30:33.297265   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:33.297812   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem -> /usr/share/ca-certificates/1260.pem
	I0401 11:30:33.298034   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:30:35.630288   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:35.630288   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:35.630648   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:38.386890   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:30:38.386890   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:38.388113   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:30:38.488532   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0401 11:30:38.497495   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 11:30:38.534315   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0401 11:30:38.542104   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0401 11:30:38.579361   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 11:30:38.587456   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 11:30:38.629590   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0401 11:30:38.637439   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 11:30:38.677586   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0401 11:30:38.685775   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 11:30:38.726786   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0401 11:30:38.735391   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0401 11:30:38.760875   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:30:38.818003   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:30:38.872299   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:30:38.924420   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 11:30:38.975369   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0401 11:30:39.029400   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 11:30:39.083007   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:30:39.134000   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 11:30:39.187056   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 11:30:39.242386   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:30:39.292920   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 11:30:39.359184   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 11:30:39.396188   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0401 11:30:39.430262   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 11:30:39.466257   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 11:30:39.503180   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 11:30:39.537744   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0401 11:30:39.573588   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0401 11:30:39.624512   12872 ssh_runner.go:195] Run: openssl version
	I0401 11:30:39.647953   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 11:30:39.684138   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 11:30:39.692258   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 11:30:39.707273   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 11:30:39.730763   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:30:39.766586   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:30:39.804161   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:39.813131   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:39.831406   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:39.855372   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 11:30:39.887412   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 11:30:39.928396   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 11:30:39.936549   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 11:30:39.951393   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 11:30:39.979352   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
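
The openssl x509 -hash -noout runs above compute OpenSSL's subject hash for each CA, and the ln -fs commands publish each cert under /etc/ssl/certs/<hash>.0, the path layout OpenSSL's lookup-by-hash expects (b5213941.0 is indeed the minikubeCA hash seen in the log). A small Go sketch of the same step, shelling out to openssl and printing the link command instead of creating it (requires openssl on PATH; paths are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	// OpenSSL resolves CAs as <hash>.N; .0 is the first (usually only) slot.
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
}
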
	I0401 11:30:40.018684   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:30:40.026055   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 11:30:40.026402   12872 kubeadm.go:928] updating node {m03 172.19.145.208 8443 v1.29.3 docker true true} ...
	I0401 11:30:40.026574   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-401500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.145.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 11:30:40.026639   12872 kube-vip.go:111] generating kube-vip config ...
	I0401 11:30:40.040791   12872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 11:30:40.070094   12872 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 11:30:40.070094   12872 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0401 11:30:40.084930   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 11:30:40.103559   12872 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 11:30:40.117318   12872 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 11:30:40.137308   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0401 11:30:40.137308   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0401 11:30:40.137308   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0401 11:30:40.137308   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:30:40.137308   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:30:40.153000   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:30:40.154156   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:30:40.156169   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:30:40.177128   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:30:40.177195   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 11:30:40.177195   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 11:30:40.177195   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0401 11:30:40.177195   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 11:30:40.194244   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:30:40.286223   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 11:30:40.286312   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
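
The binary.go lines above skip the on-disk cache and fetch kubelet, kubeadm, and kubectl straight from dl.k8s.io, pairing each download URL with a companion .sha256 checksum URL for verification before the scp onto the node. A sketch of that URL pairing:

package main

import "fmt"

// binaryURL returns the download URL and its checksum URL for a Kubernetes
// release binary, matching the pattern in the log above.
func binaryURL(version, osName, arch, name string) (bin, sum string) {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/%s", version, osName, arch, name)
	return base, base + ".sha256"
}

func main() {
	for _, b := range []string{"kubelet", "kubeadm", "kubectl"} {
		bin, sum := binaryURL("v1.29.3", "linux", "amd64", b)
		fmt.Printf("%s (checksum: %s)\n", bin, sum)
	}
}
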
	I0401 11:30:41.757209   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 11:30:41.777889   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0401 11:30:41.814235   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:30:41.852143   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 11:30:41.896910   12872 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0401 11:30:41.903965   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:30:41.942157   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:42.166468   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:30:42.207153   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:30:42.208101   12872 start.go:316] joinCluster: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.145.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:30:42.208182   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 11:30:42.208379   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:30:44.466967   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:44.466967   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:44.467734   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:47.245121   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:30:47.245186   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:47.245445   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:30:47.450439   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2422201s)
	I0401 11:30:47.450439   12872 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.145.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:30:47.450439   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xpgd5p.3hmotncbc7b1c956 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m03 --control-plane --apiserver-advertise-address=172.19.145.208 --apiserver-bind-port=8443"
	I0401 11:31:44.747700   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xpgd5p.3hmotncbc7b1c956 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m03 --control-plane --apiserver-advertise-address=172.19.145.208 --apiserver-bind-port=8443": (57.2968606s)
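
The join above is a two-step handshake: "kubeadm token create --print-join-command" on the primary emits the base join command, and minikube appends the control-plane flags (CRI socket, node name, advertise address) before running it on m03. A sketch of that assembly; the printed command here is a placeholder, while the appended flags and values are exactly those from the log:

package main

import (
	"fmt"
	"strings"
)

// joinArgs extends the output of `kubeadm token create --print-join-command`
// with the extra flags minikube passes for a control-plane node.
func joinArgs(printed, nodeName, advertiseIP string) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/cri-dockerd.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return strings.TrimSpace(printed) + " " + strings.Join(extra, " ")
}

func main() {
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(joinArgs(printed, "ha-401500-m03", "172.19.145.208"))
}
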
	I0401 11:31:44.747700   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 11:31:45.504229   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-401500-m03 minikube.k8s.io/updated_at=2024_04_01T11_31_45_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=ha-401500 minikube.k8s.io/primary=false
	I0401 11:31:45.692841   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-401500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0401 11:31:45.890412   12872 start.go:318] duration metric: took 1m3.6816429s to joinCluster
	I0401 11:31:45.890528   12872 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.19.145.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:31:45.891284   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:31:45.894837   12872 out.go:177] * Verifying Kubernetes components...
	I0401 11:31:45.911938   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:31:46.283950   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:31:46.322524   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:31:46.323226   12872 kapi.go:59] client config for ha-401500: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 11:31:46.323226   12872 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.153.73:8443
	I0401 11:31:46.326442   12872 node_ready.go:35] waiting up to 6m0s for node "ha-401500-m03" to be "Ready" ...
	I0401 11:31:46.326442   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:46.326442   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.326442   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.326442   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.340746   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:31:46.342206   12872 node_ready.go:49] node "ha-401500-m03" has status "Ready":"True"
	I0401 11:31:46.342285   12872 node_ready.go:38] duration metric: took 15.8434ms for node "ha-401500-m03" to be "Ready" ...
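
node_ready.go's wait loop above is a plain GET on /api/v1/nodes/ha-401500-m03 followed by a scan of status.conditions for a Ready condition with status "True". The core predicate looks roughly like this, with the struct trimmed to the two fields the check uses:

package main

import "fmt"

// nodeCondition mirrors the fields of a node's status.conditions entries
// that the readiness check cares about.
type nodeCondition struct {
	Type   string
	Status string
}

// isReady reports whether the Ready condition is True.
func isReady(conds []nodeCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	conds := []nodeCondition{
		{Type: "MemoryPressure", Status: "False"},
		{Type: "Ready", Status: "True"},
	}
	fmt.Println(`node "ha-401500-m03" Ready:`, isReady(conds))
}
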
	I0401 11:31:46.342285   12872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:31:46.342475   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:31:46.342501   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.342501   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.342501   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.361730   12872 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0401 11:31:46.372039   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.372039   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4xvlf
	I0401 11:31:46.372039   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.372039   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.372039   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.383278   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:31:46.385316   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:46.385398   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.385398   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.385398   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.408286   12872 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0401 11:31:46.409401   12872 pod_ready.go:92] pod "coredns-76f75df574-4xvlf" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.409401   12872 pod_ready.go:81] duration metric: took 37.361ms for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.409401   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.409924   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vjslq
	I0401 11:31:46.409924   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.409924   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.409924   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.414514   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:46.415563   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:46.415563   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.415563   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.415563   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.421494   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:46.422195   12872 pod_ready.go:92] pod "coredns-76f75df574-vjslq" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.422232   12872 pod_ready.go:81] duration metric: took 12.8315ms for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.422283   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.422724   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500
	I0401 11:31:46.422724   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.422724   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.422724   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.431725   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:46.432397   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:46.432397   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.432397   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.432397   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.436927   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:46.437253   12872 pod_ready.go:92] pod "etcd-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.437253   12872 pod_ready.go:81] duration metric: took 14.6151ms for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.437253   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.437253   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:31:46.437833   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.437877   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.437877   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.441021   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:46.442133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:31:46.442133   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.442133   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.443339   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.447469   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:46.448876   12872 pod_ready.go:92] pod "etcd-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.448943   12872 pod_ready.go:81] duration metric: took 11.6891ms for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.448943   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.528708   12872 request.go:629] Waited for 79.4213ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:46.529022   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:46.529022   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.529022   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.529126   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.537096   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:31:46.735050   12872 request.go:629] Waited for 196.9286ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:46.735265   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:46.735458   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.735458   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.735458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.741221   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
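
The "Waited ... due to client-side throttling" messages here and below come from client-go's token-bucket rate limiter: with QPS and Burst left at 0 in the rest.Config logged above, the client-go defaults (historically 5 QPS / 10 burst) apply, so this tight poll loop queues its own requests. A sketch of raising those limits on a rest.Config, shown only to illustrate the knob, not as a change minikube makes:

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	// QPS/Burst of 0 fall back to client-go defaults; setting them
	// explicitly avoids the client-side waits seen in the log.
	cfg := &rest.Config{
		Host:  "https://172.19.153.73:8443",
		QPS:   50,
		Burst: 100,
	}
	fmt.Printf("QPS=%v Burst=%v against %s\n", cfg.QPS, cfg.Burst, cfg.Host)
}
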
	I0401 11:31:46.957627   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:46.957708   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.957708   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.957708   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.962878   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.130326   12872 request.go:629] Waited for 165.9441ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.130426   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.130636   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.130767   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.130767   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.135889   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.457030   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:47.457113   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.457172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.457172   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.462831   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.534173   12872 request.go:629] Waited for 69.8074ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.534364   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.534364   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.534468   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.534468   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.541019   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:47.956206   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:47.956206   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.956206   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.956206   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.961639   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.962989   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.962989   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.962989   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.962989   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.967334   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:48.457648   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:48.457648   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.457648   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.457648   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.462766   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:48.463427   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:48.463427   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.463427   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.463427   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.468064   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:48.469239   12872 pod_ready.go:102] pod "etcd-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:48.961515   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:48.961515   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.961515   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.961515   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.967037   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:48.968740   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:48.968740   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.968740   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.968740   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.972587   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:49.463543   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:49.463543   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.463543   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.463543   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.469087   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:49.470826   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:49.470908   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.470908   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.470908   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.474762   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:49.962298   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:49.962387   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.962387   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.962387   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.967802   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:49.968847   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:49.973169   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.973169   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.973169   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.978209   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:50.459299   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:50.459518   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.459601   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.459601   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.465456   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:50.466541   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:50.466644   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.466644   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.466784   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.472094   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:50.472703   12872 pod_ready.go:92] pod "etcd-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:50.472703   12872 pod_ready.go:81] duration metric: took 4.0237322s for pod "etcd-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
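The 4.02s duration metric is the visible cost of a simple poll: pod_ready.go re-fetches the pod (and the node it runs on) roughly every 500 ms until the pod's Ready condition is True or the 6-minute budget runs out. A hedged sketch of that loop with client-go; the helper name waitPodReady is hypothetical, and the interval/timeout values are read off the cadence of this log rather than taken from minikube's source:

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod every interval until its Ready condition is
	// True or the timeout elapses -- the same shape as the pod_ready.go loop
	// in this log. Illustrative only; minikube's helper differs in detail.
	//
	// Usage: err := waitPodReady(ctx, clientset, "kube-system",
	//     "etcd-ha-401500-m03", 500*time.Millisecond, 6*time.Minute)
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod_ready.go:92 equivalent: "Ready":"True"
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
			}
			time.Sleep(interval) // pod_ready.go:102 equivalent: still "Ready":"False"
		}
	}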
	I0401 11:31:50.472703   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.472703   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500
	I0401 11:31:50.472703   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.472703   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.472703   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.477502   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:50.478690   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:50.478745   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.478745   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.478745   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.482967   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:50.485701   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:50.485733   12872 pod_ready.go:81] duration metric: took 13.0296ms for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.485733   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.537774   12872 request.go:629] Waited for 51.9597ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:31:50.538010   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:31:50.538010   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.538010   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.538210   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.556930   12872 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0401 11:31:50.741953   12872 request.go:629] Waited for 183.7751ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:31:50.742262   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:31:50.742262   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.742326   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.742326   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.748011   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:50.748773   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:50.748773   12872 pod_ready.go:81] duration metric: took 263.0383ms for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
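The "Waited for ... due to client-side throttling, not priority and fairness" lines in this stretch are emitted by client-go when its request-level token bucket runs dry: a burst of back-to-back GETs uses up the burst allowance, and each following request blocks until a token refills. A minimal sketch of that mechanism with golang.org/x/time/rate; the 5 QPS / burst 2 values are illustrative, not minikube's actual configuration:

	package main

	import (
		"context"
		"log"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// Token bucket: 5 requests/second with a burst of 2 (illustrative values).
		limiter := rate.NewLimiter(rate.Limit(5), 2)
		for i := 0; i < 6; i++ {
			start := time.Now()
			if err := limiter.Wait(context.Background()); err != nil {
				log.Fatal(err)
			}
			if wait := time.Since(start); wait > time.Millisecond {
				// client-go logs the analogous message at request.go:629.
				log.Printf("Waited for %s due to client-side throttling", wait)
			}
			// issue the API request here
		}
	}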
	I0401 11:31:50.748773   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.931157   12872 request.go:629] Waited for 182.2207ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:50.931430   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:50.931430   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.931430   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.931536   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.938030   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:51.135179   12872 request.go:629] Waited for 195.9234ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.135179   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.135322   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.135322   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.135322   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.143473   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:31:51.340865   12872 request.go:629] Waited for 77.7315ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:51.341102   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:51.341146   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.341172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.341241   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.346671   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:51.528179   12872 request.go:629] Waited for 179.5784ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.528361   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.528361   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.528361   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.528361   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.535141   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:51.764099   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:51.764160   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.764160   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.764160   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.768949   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:51.934067   12872 request.go:629] Waited for 163.2675ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.934498   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.934498   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.934498   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.934610   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.940535   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:52.263170   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:52.263170   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.263170   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.263170   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.279706   12872 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0401 11:31:52.340448   12872 request.go:629] Waited for 58.2764ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:52.340577   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:52.340577   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.340577   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.340641   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.349906   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:52.750925   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:52.750925   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.751001   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.751064   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.755449   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:52.756846   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:52.756846   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.756846   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.756846   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.760475   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:52.762286   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
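Each poll above pairs the pod GET with a GET of the node hosting it; readiness is only reported once the pod condition flips, and the node fetch follows the same condition-scan pattern. A sketch of the node-side check, again assuming only standard client-go calls (nodeReady is a hypothetical helper):

	package nodewait

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether a node's NodeReady condition is True,
	// mirroring the per-poll GET /api/v1/nodes/<name> seen in this log.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}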
	I0401 11:31:53.260446   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:53.260446   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.260446   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.260446   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.266279   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:53.267730   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:53.267806   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.267806   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.267806   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.272139   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:53.750365   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:53.750431   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.750431   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.750431   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.759792   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:53.761492   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:53.761492   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.761492   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.761492   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.766346   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:54.249071   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:54.249071   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.249071   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.249071   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.254701   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:54.256571   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:54.256671   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.256671   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.256671   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.260790   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:54.753065   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:54.753172   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.753172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.753172   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.758677   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:54.759905   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:54.760448   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.760448   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.760448   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.764677   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:54.765588   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:55.255504   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:55.255504   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.255646   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.255646   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.261034   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:55.262924   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:55.262924   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.262924   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.262924   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.270421   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:31:55.754536   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:55.754536   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.754536   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.754536   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.760864   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:55.763042   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:55.763042   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.763042   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.763042   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.767382   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:56.251981   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:56.252268   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.252268   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.252487   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.258426   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:56.259065   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:56.259065   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.259065   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.259065   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.268750   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:56.753166   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:56.753270   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.753270   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.753270   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.761595   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:31:56.763518   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:56.763584   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.763584   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.763584   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.768753   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:56.768949   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:57.264177   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:57.264281   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.264281   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.264281   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.270726   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:57.271824   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:57.271824   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.271824   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.271824   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.280448   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:31:57.763345   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:57.763345   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.763345   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.763345   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.769110   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:57.770576   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:57.770576   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.770576   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.770576   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.778082   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:31:58.264098   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:58.264283   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.264283   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.264283   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.269964   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:58.271256   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:58.271256   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.271256   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.271256   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.275853   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:58.760704   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:58.760704   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.760704   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.760704   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.765762   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:58.767089   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:58.767160   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.767160   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.767160   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.771741   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:58.772863   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:59.263254   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:59.263254   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.263254   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.263254   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.268733   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:59.270513   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:59.270513   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.270513   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.270513   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.276390   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:59.759685   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:59.759763   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.759763   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.759839   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.765062   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:59.766426   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:59.766426   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.766486   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.766486   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.772628   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:00.257940   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:00.257940   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.257940   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.258071   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.262795   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:00.264354   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:00.264410   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.264410   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.264410   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.267723   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:00.756645   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:00.756767   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.756767   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.756767   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.761222   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:00.763633   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:00.763633   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.763633   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.763633   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.768238   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:01.258169   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:01.258169   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.258169   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.258169   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.265909   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:01.267104   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:01.267104   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.267104   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.267104   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.272724   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:01.273297   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:01.760219   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:01.760219   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.760219   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.760219   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.765936   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:01.768305   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:01.768305   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.768305   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.768376   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.777348   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:02.262823   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:02.262823   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.262899   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.262899   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.268250   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:02.270115   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:02.270161   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.270161   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.270161   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.275109   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:02.761714   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:02.761982   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.761982   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.761982   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.766588   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:02.768814   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:02.768814   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.768884   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.768884   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.774191   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:03.262612   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:03.262612   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.262612   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.262724   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.268026   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:03.269335   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:03.269534   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.269534   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.269617   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.274374   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:03.274374   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:03.749805   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:03.749805   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.749884   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.749884   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.755695   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:03.757557   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:03.757557   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.757657   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.757657   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.761831   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:04.252201   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:04.252201   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.252201   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.252201   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.261576   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:32:04.262940   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:04.263038   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.263038   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.263038   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.270471   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:04.754894   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:04.755017   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.755017   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.755017   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.760776   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:04.762073   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:04.762198   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.762198   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.762198   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.766868   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:05.262942   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:05.262942   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.262942   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.262942   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.268406   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:05.269335   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:05.269582   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.269582   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.269582   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.274035   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:05.275500   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:05.752450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:05.752531   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.752531   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.752531   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.757571   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:05.759938   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:05.759938   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.759938   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.759938   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.766699   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:06.256107   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:06.256107   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.256107   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.256107   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.260784   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:06.262908   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:06.262908   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.262908   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.262908   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.266227   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:06.758078   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:06.758078   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.758078   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.758078   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.763605   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:06.764920   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:06.764979   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.764979   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.764979   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.769209   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:07.259553   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:07.259553   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.259641   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.259641   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.265238   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:07.266021   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:07.266021   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.266021   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.266021   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.271438   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:07.756532   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:07.756532   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.756689   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.756689   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.762542   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:07.764434   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:07.764434   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.764434   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.764434   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.769024   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:07.769906   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:08.255038   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:08.255038   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.255038   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.255038   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.261644   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:08.263248   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:08.263316   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.263316   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.263316   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.268580   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:08.756878   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:08.756878   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.756878   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.756878   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.761225   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:08.763125   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:08.763125   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.763228   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.763228   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.767441   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:09.262129   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:09.262129   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.262450   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.262450   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.268290   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:09.269373   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:09.269428   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.269428   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.269428   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.273304   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:09.758004   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:09.758004   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.758004   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.758004   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.767104   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:32:09.768115   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:09.768115   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.768115   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.768115   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.774157   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:09.775457   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:10.260384   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:10.260384   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.260384   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.260384   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.265984   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:10.267109   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:10.267199   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.267199   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.267199   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.271257   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:10.762987   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:10.763091   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.763091   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.763091   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.768681   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:10.770606   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:10.770606   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.770606   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.770606   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.781259   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:11.263963   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:11.263963   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.263963   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.263963   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.268382   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:11.270206   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:11.270206   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.270277   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.270277   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.274996   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:11.763256   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:11.763329   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.763329   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.763329   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.768128   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:11.770104   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:11.770104   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.770104   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.770104   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.775200   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:11.776034   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:12.250756   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:12.250756   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.250969   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.250969   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.257060   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:12.259096   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:12.259096   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.259096   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.259096   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.263248   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:12.751237   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:12.751367   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.751367   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.751367   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.777011   12872 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0401 11:32:12.778242   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:12.778242   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.778389   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.778389   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.787068   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:13.263571   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:13.263647   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.263647   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.263647   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.269018   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:13.270358   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:13.270358   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.270358   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.270358   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.274622   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:13.749647   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:13.749962   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.749962   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.750061   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.758305   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:13.759062   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:13.759062   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.759062   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.759062   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.763799   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:14.264129   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:14.264129   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.264129   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.264129   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.269370   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:14.271032   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:14.271032   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.271032   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.271032   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.274793   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:14.276164   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:14.749852   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:14.750053   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.750053   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.750053   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.755689   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:14.756763   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:14.756856   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.756856   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.756856   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.763833   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:15.253094   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:15.253153   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.253153   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.253153   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.258084   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:15.259100   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:15.259100   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.259100   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.259100   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.270475   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:32:15.754346   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:15.754416   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.754416   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.754416   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.763143   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:15.765147   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:15.765211   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.765269   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.765269   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.771879   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:16.254141   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:16.254141   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.254141   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.254141   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.258802   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:16.258802   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:16.258802   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.258802   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.258802   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.265792   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:16.757434   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:16.757434   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.757434   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.757434   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.763052   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:16.765268   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:16.765268   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.765373   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.765373   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.769663   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:16.770653   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:17.258568   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:17.258568   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.258568   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.258568   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.264206   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:17.265421   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:17.265504   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.265504   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.265504   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.270822   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:17.758997   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:17.759067   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.759067   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.759067   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.764398   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:17.766041   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:17.766041   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.766100   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.766100   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.770380   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:18.257515   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:18.257515   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.257515   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.257684   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.293622   12872 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0401 11:32:18.294614   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:18.294614   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.294614   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.294614   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.301901   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:18.757155   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:18.757155   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.757155   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.757155   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.762470   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:18.764357   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:18.764415   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.764415   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.764490   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.769884   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:18.770990   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:19.261112   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:19.261183   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.261183   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.261183   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.269734   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:19.271126   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:19.271126   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.271126   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.271126   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.275273   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:19.756129   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:19.756129   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.756129   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.756129   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.761493   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:19.762550   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:19.762550   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.762550   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.762550   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.766508   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:20.259141   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:20.259141   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.259141   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.259230   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.264386   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:20.265509   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:20.265509   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.265509   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.265509   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.270709   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:20.755610   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:20.755677   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.755677   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.755677   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.761016   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:20.762758   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:20.762758   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.762758   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.762758   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.768382   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:21.259197   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:21.259271   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.259271   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.259271   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.269351   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:21.270403   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:21.270403   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.270403   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.270403   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.275007   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:21.276003   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:21.759927   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:21.759927   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.759927   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.759927   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.765525   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:21.767410   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:21.767410   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.767410   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.767410   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.772695   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:22.258788   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:22.259038   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.259038   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.259038   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.264233   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:22.266348   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:22.266397   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.266397   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.266397   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.271101   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:22.756788   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:22.756874   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.756874   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.756874   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.761436   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:22.763593   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:22.763593   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.763593   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.763593   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.768402   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:23.253807   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:23.253807   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.253807   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.253807   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.259720   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:23.261185   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:23.261266   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.261266   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.261336   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.269751   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:23.757792   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:23.757792   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.757856   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.757856   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.765851   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:23.766440   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:23.766440   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.766440   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.766440   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.771997   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:23.771997   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:24.263668   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:24.263752   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.263752   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.263752   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.269679   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:24.270680   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:24.270680   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.270680   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.270680   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.282705   12872 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 11:32:24.753929   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:24.753929   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.754027   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.754027   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.757838   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:24.759602   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:24.760192   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.760192   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.760192   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.764564   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:25.260327   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:25.260383   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.260383   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.260383   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.265874   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:25.267133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:25.267133   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.267133   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.267198   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.270936   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:25.762623   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:25.762728   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.762728   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.762728   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.768306   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:25.769789   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:25.769847   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.769847   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.769847   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.774238   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:25.775074   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:26.263943   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:26.263943   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.264177   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.264177   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.269508   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:26.270702   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:26.270780   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.270780   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.270780   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.276109   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:26.750999   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:26.751321   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.751321   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.751321   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.756111   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:26.757242   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:26.757318   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.757318   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.757318   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.761899   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:27.250866   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:27.250866   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.250866   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.250973   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.256293   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:27.258409   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:27.258543   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.258543   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.258543   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.262846   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:27.762962   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:27.762962   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.762962   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.762962   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.768140   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:27.769584   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:27.769584   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.769584   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.769584   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.774750   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:27.775875   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:28.250469   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:28.250587   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.250587   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.250587   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.256680   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:28.258306   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:28.258365   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.258365   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.258365   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.262270   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:28.751857   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:28.751939   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.751939   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.751939   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.758182   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:28.759437   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:28.759563   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.759563   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.759563   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.763801   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:29.259472   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:29.259472   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.259472   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.259472   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.265470   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:29.266396   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:29.266396   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.266473   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.266473   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.270620   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:29.757879   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:29.757958   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.757958   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.757958   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.763990   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:29.765229   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:29.765229   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.765229   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.765229   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.769782   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:30.259850   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:30.259850   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.259850   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.259944   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.265231   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:30.267155   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:30.267155   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.267155   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.267155   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.271792   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:30.272124   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:30.757080   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:30.757195   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.757195   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.757195   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.762137   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:30.763547   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:30.763650   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.763650   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.763650   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.768932   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:31.259726   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:31.259726   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.260015   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.260015   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.270596   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:31.272545   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:31.272661   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.272661   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.272661   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.277925   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:31.758211   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:31.758211   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.758211   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.758211   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.782687   12872 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0401 11:32:31.785025   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:31.785096   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.785118   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.785118   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.798965   12872 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0401 11:32:32.261307   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:32.261307   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.261307   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.261307   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.267923   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:32.269641   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:32.269641   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.269773   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.269773   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.273203   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:32.274664   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:32.749773   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:32.749883   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.749948   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.749948   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.757419   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:32.758923   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:32.758923   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.758923   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.759031   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.766189   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:33.249897   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:33.249964   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.249964   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.249964   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.253686   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:33.256081   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:33.256081   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.256138   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.256138   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.261201   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:33.764528   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:33.764793   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.764793   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.764793   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.770328   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:33.772417   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:33.772417   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.772500   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.772500   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.778953   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:34.249713   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:34.249771   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.249771   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.249771   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.256360   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:34.257122   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:34.257122   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.257122   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.257122   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.262460   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:34.765090   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:34.765090   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.765173   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.765173   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.769495   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:34.770838   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:34.770838   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.770838   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.771371   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.782518   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:32:34.784433   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:35.252221   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:35.252221   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.252367   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.252367   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.258103   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:35.259416   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:35.259483   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.259483   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.259483   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.264736   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:35.753272   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:35.753351   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.753412   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.753412   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.759853   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:35.761184   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:35.761272   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.761272   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.761331   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.765926   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:36.251681   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:36.251681   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.251681   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.251681   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.258169   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:36.259313   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:36.259313   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.259313   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.259313   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.260723   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 11:32:36.754119   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:36.754119   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.754186   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.754186   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.760085   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:36.761868   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:36.761966   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.761966   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.761966   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.766854   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:37.255294   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:37.255294   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.255554   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.255554   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.260487   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:37.262065   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:37.262141   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.262141   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.262141   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.267224   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:37.268381   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:37.758167   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:37.758167   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.758167   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.758167   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.763749   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:37.766052   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:37.766113   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.766113   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.766113   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.769381   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:38.255056   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:38.255230   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.255230   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.255230   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.260340   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:38.261851   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:38.261851   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.261851   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.261851   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.266838   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:38.757616   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:38.757616   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.757616   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.757616   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.762188   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:38.764508   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:38.764508   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.764508   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.764508   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.769101   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:39.262737   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:39.262858   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.262858   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.262858   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.270257   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:39.271783   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:39.271839   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.271839   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.271839   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.276647   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:39.278517   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:39.762932   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:39.762932   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.762932   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.762932   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.769561   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:39.770649   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:39.770805   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.770805   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.770805   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.776384   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:40.265228   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:40.265228   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.265228   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.265228   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.269608   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:40.271207   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:40.271207   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.271207   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.271207   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.278736   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:40.764518   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:40.764616   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.764616   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.764690   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.770694   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:40.771775   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:40.771775   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.771775   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.771888   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.794130   12872 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0401 11:32:41.253602   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:41.253677   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.253747   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.253747   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.258363   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:41.259347   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:41.259347   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.259347   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.259347   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.265678   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:41.754483   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:41.754483   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.754483   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.754483   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.758482   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:41.760315   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:41.760315   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.760315   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.760315   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.773011   12872 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 11:32:41.773629   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:42.257893   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:42.257893   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.257893   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.257893   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.263800   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:42.264820   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:42.264820   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.264820   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.264820   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.268838   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:42.749895   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:42.749956   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.750013   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.750013   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.753473   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:42.755470   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:42.755470   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.755470   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.755470   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.763474   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:43.252435   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:43.252435   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.252435   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.252435   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.258041   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:43.259407   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:43.259461   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.259461   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.259461   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.263438   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:43.758515   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:43.758515   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.758515   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.758515   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.764505   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:43.768404   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:43.768509   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.768509   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.768570   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.775355   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:43.776698   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:44.250292   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:44.250292   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.250292   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.250292   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.256710   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:44.257745   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:44.257745   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.257745   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.257745   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.262584   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:44.757546   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:44.757546   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.757546   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.757546   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.762224   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:44.763062   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:44.763062   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.763062   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.763062   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.766769   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:45.256728   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:45.257074   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.257074   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.257074   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.262150   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:45.264062   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:45.264062   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.264062   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.264062   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.268521   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:45.756449   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:45.756575   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.756575   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.756575   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.761545   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:45.763458   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:45.763458   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.763458   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.763458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.768425   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:46.263925   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:46.264186   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.264385   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.264416   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.269386   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:46.270387   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:46.270387   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.270387   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.270387   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.277682   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:46.279162   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:46.762450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:46.762450   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.762450   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.762534   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.767679   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:46.768984   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:46.769042   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.769042   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.769042   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.775266   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:47.263064   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:47.263365   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.263365   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.263365   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.268103   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:47.269761   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:47.269878   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.269878   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.269878   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.274625   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:47.763866   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:47.763866   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.763866   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.763866   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.769524   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:47.771458   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:47.771458   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.771458   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.771458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.777933   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:48.264190   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:48.264434   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.264434   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.264434   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.269960   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:48.271100   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:48.271100   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.271100   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.271100   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.274739   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:48.749585   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:48.749655   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.749655   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.749655   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.756025   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:48.757957   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:48.757957   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.757957   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.757957   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.761607   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:48.762785   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:49.251340   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:49.251417   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.251417   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.251417   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.256792   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:49.259290   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:49.259290   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.259290   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.259290   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.265864   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:49.750232   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:49.750382   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.750382   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.750382   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.755751   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:49.757413   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:49.757469   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.757469   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.757469   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.762039   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:50.251443   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:50.251443   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.251443   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.251443   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.256534   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:50.258209   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:50.258209   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.258209   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.258209   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.262983   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:50.750520   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:50.750520   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.750520   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.750520   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.759164   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:50.761133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:50.761257   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.761448   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.761539   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.765484   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:50.766070   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:51.254043   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:51.254103   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.254161   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.254161   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.260915   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:51.262362   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:51.262431   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.262431   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.262431   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.268045   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:51.755238   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:51.755238   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.755238   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.755238   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.762096   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:51.763880   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:51.764083   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.764083   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.764083   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.769102   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:52.258952   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:52.259143   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.259143   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.259143   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.265564   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:52.266393   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:52.266937   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.266937   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.266937   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.271771   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:52.758508   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:52.758508   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.758508   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.758508   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.764252   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:52.765648   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:52.765704   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.765704   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.765704   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.769972   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:52.771077   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:53.249644   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:53.249866   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.249866   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.249866   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.255286   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:53.257162   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:53.257162   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.257162   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.257162   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.261405   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:53.763103   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:53.763103   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.763103   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.763103   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.768808   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:53.769860   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:53.769860   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.769860   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.769860   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.774574   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:54.249610   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:54.249610   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.249610   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.249610   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.255213   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:54.257695   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:54.257761   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.257761   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.257761   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.261880   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:54.763740   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:54.764021   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.764021   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.764021   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.768875   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:54.770392   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:54.770963   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.770963   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.770963   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.781668   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:54.785438   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:55.263110   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:55.263409   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.263409   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.263409   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.277984   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:32:55.279624   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:55.279624   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.279624   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.279624   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.284290   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:55.769490   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:55.769557   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.769557   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.769557   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.770139   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0401 11:32:55.775716   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:55.775716   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.775716   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.775716   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.778419   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 11:32:56.265398   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:56.265398   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.265398   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.265398   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.270654   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:56.272325   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:56.272384   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.272384   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.272384   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.281537   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:32:56.756443   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:56.756443   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.756630   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.756630   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.760917   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:56.763263   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:56.763324   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.763324   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.763324   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.767029   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:57.262705   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:57.262705   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.262705   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.262705   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.267441   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.269453   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:57.269453   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.269453   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.269453   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.274474   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:57.275456   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:32:57.275456   12872 pod_ready.go:81] duration metric: took 1m6.5262173s for pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
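	[editor's note] The loop above is minikube's pod_ready helper issuing two GETs per cycle (the pod, then its node) on a roughly 500ms cadence until the pod's Ready condition flips to True. Purely as an illustration of the pattern visible in this log, the sketch below reproduces an equivalent readiness poll with client-go; the kubeconfig source, poll interval, and timeout are assumptions for the sketch, not minikube's actual implementation.

	// Hedged sketch: poll a kube-system pod until its Ready condition is True,
	// mirroring the GET /api/v1/namespaces/kube-system/pods/<name> cycle in the
	// log above. Interval and timeout values here are hypothetical.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The test waits up to 6m0s for each pod, per the pod_ready lines above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		name := "kube-apiserver-ha-401500-m03"
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// Ready:True is the condition the log reports as "Ready":"True".
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for Ready")
				return
			case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence seen above
			}
		}
	}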
	I0401 11:32:57.275456   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.275594   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500
	I0401 11:32:57.275655   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.275655   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.275655   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.282385   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:57.283695   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:32:57.283770   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.283770   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.283770   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.288264   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.288657   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:32:57.289183   12872 pod_ready.go:81] duration metric: took 13.7276ms for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.289183   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.289430   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m02
	I0401 11:32:57.289430   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.289430   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.289430   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.294032   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.295596   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:32:57.295596   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.295596   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.295596   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.300969   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.301879   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:32:57.301938   12872 pod_ready.go:81] duration metric: took 12.6953ms for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.301938   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.302069   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:57.302111   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.302111   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.302111   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.305976   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:57.306919   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:57.306919   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.306919   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.306919   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.311071   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.812612   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:57.812647   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.812691   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.812691   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.818844   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:57.820008   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:57.820008   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.820539   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.820539   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.824148   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:58.312932   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:58.312978   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.312978   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.313023   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.318305   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:58.319502   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:58.319556   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.319556   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.319556   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.324368   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:58.812370   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:58.812483   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.812483   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.812483   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.818911   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:58.820518   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:58.820574   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.820634   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.820634   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.827748   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:59.314797   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:59.314797   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.314797   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.314893   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.322676   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:59.324278   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:59.324278   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.324340   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.324340   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.328308   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:59.329718   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:59.815381   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:59.815530   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.815530   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.815530   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.821681   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:59.822886   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:59.822886   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.822886   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.822886   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.827532   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:00.317123   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:00.317123   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.317123   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.317123   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.323325   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:00.325366   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:00.325488   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.325488   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.325520   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.329137   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:00.802753   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:00.802753   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.802860   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.802860   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.809084   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:00.810429   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:00.810429   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.810512   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.810512   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.813726   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:01.315138   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:01.315475   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.315475   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.315475   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.321812   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:01.323421   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:01.323421   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.323421   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.323421   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.328092   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:01.802654   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:01.802654   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.802654   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.802654   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.809300   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:01.810999   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:01.810999   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.811088   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.811088   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.815515   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:01.816621   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:02.306438   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:02.306553   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.306553   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.306553   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.311885   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:02.312896   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:02.312896   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.312896   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.312896   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.317751   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:02.806329   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:02.806481   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.806540   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.806540   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.810946   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:02.812949   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:02.812949   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.812949   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.812949   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.817267   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:03.307461   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:03.307461   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.307461   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.307699   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.312430   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:03.314380   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:03.314380   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.314380   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.314380   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.318977   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:03.805172   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:03.805172   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.805172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.805172   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.810605   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:03.812532   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:03.812592   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.812592   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.812592   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.816481   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:03.818011   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:04.309723   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:04.309723   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.309723   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.309723   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.315548   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:04.316952   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:04.316952   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.316952   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.316952   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.323146   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:04.810654   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:04.810654   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.810654   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.810654   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.816227   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:04.817556   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:04.817556   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.817624   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.817624   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.821474   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:05.308035   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:05.308072   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.308072   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.308072   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.319868   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:33:05.321008   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:05.321008   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.321008   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.321008   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.325720   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:05.806864   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:05.806864   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.806864   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.806864   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.819639   12872 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 11:33:05.820852   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:05.820911   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.820911   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.820968   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.824148   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:05.825503   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:06.307490   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:06.307728   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.307728   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.307728   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.315563   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:33:06.316676   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:06.316676   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.316676   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.316676   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.321281   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:06.806080   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:06.806080   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.806080   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.806080   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.811694   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:06.813178   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:06.813178   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.813178   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.813178   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.819765   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:07.308436   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:07.308561   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.308561   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.308561   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.313986   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:07.316309   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:07.316309   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.316309   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.316309   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.398150   12872 round_trippers.go:574] Response Status: 200 OK in 81 milliseconds
	I0401 11:33:07.813041   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:07.813041   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.813041   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.813041   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.818591   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:07.819857   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:07.819857   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.819857   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.819857   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.824805   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:08.315818   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:08.315818   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.315818   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.315818   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.322023   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:08.323094   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:08.323094   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.323094   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.323094   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.326886   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:08.328039   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:08.805316   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:08.805316   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.805316   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.805316   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.814304   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:33:08.816116   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:08.816116   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.816116   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.816116   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.821110   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:09.306658   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:09.306745   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.306745   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.306745   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.312014   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:09.313210   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:09.313210   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.313210   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.313210   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.317798   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:09.805561   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:09.805561   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.805561   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.805561   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.812319   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:09.813509   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:09.813509   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.813509   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.813509   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.817099   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:10.305608   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:10.305608   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.305691   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.305691   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.310882   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.312576   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:10.312695   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.312695   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.312695   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.318132   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.811690   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:10.811690   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.811690   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.811690   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.817361   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.818794   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:10.818794   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.818794   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.818794   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.823401   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.823893   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.823893   12872 pod_ready.go:81] duration metric: took 13.5218606s for pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.823893   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.823893   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zds
	I0401 11:33:10.823893   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.823893   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.823893   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.829079   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.831010   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:33:10.831126   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.831126   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.831126   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.840574   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:33:10.840574   12872 pod_ready.go:92] pod "kube-proxy-28zds" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.840574   12872 pod_ready.go:81] duration metric: took 16.6807ms for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.840574   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccgpw" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.841397   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ccgpw
	I0401 11:33:10.841514   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.841514   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.841514   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.845532   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.847068   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:10.847236   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.847236   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.847353   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.851950   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.853103   12872 pod_ready.go:92] pod "kube-proxy-ccgpw" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.853181   12872 pod_ready.go:81] duration metric: took 12.5283ms for pod "kube-proxy-ccgpw" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.853181   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.853257   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqcpv
	I0401 11:33:10.853257   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.853257   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.853257   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.859719   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:10.861086   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:33:10.861139   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.861139   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.861139   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.864432   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:10.866119   12872 pod_ready.go:92] pod "kube-proxy-hqcpv" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.866162   12872 pod_ready.go:81] duration metric: took 12.9811ms for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.866205   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.866255   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500
	I0401 11:33:10.866323   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.866323   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.866323   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.870216   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:10.871442   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:33:10.871442   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.871442   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.871442   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.876032   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.876808   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.876897   12872 pod_ready.go:81] duration metric: took 10.6923ms for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.876897   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:11.017444   12872 request.go:629] Waited for 140.4767ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:33:11.017751   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:33:11.017751   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.017751   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.017751   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.026510   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:33:11.221802   12872 request.go:629] Waited for 194.3132ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:33:11.222073   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:33:11.222073   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.222073   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.222073   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.227658   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:11.228695   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:11.228750   12872 pod_ready.go:81] duration metric: took 351.8502ms for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
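
The `request.go:629` lines just above ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's token-bucket rate limiter, not from the apiserver. Below is a minimal sketch of where that limiter is configured; the QPS/Burst values shown are client-go's documented defaults, not necessarily what minikube sets.

```go
package throttled

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset whose requests pass through client-go's
// client-side rate limiter. Once the Burst allowance is spent, further
// requests queue and emit the "Waited for ... due to client-side
// throttling" message seen in the log above.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // sustained requests per second (client-go default)
	cfg.Burst = 10 // short-term burst allowance (client-go default)
	return kubernetes.NewForConfig(cfg)
}
```
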
	I0401 11:33:11.228808   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:11.424540   12872 request.go:629] Waited for 195.6764ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m03
	I0401 11:33:11.424746   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m03
	I0401 11:33:11.424746   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.424746   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.424746   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.430964   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:11.615184   12872 request.go:629] Waited for 182.6737ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:11.615550   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:11.615550   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.615550   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.615550   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.621131   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:11.623152   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:11.623152   12872 pod_ready.go:81] duration metric: took 394.3411ms for pod "kube-scheduler-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:11.623252   12872 pod_ready.go:38] duration metric: took 1m25.2803693s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
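
The long stretch of paired GETs above (pod, then its node, roughly every 500ms) is the readiness poll that `pod_ready.go` reports on: fetch the pod, inspect its Ready condition, repeat until it is True or the 6m0s budget expires. Here is a minimal sketch of that loop assuming client-go; `waitPodReady` and `isPodReady` are illustrative names, not minikube's actual functions.

```go
package podready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the named pod reports Ready or the timeout
// expires, mirroring the GET cadence visible in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // ~500ms between polls, as in the log
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}
```
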
	I0401 11:33:11.623252   12872 api_server.go:52] waiting for apiserver process to appear ...
	I0401 11:33:11.635956   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0401 11:33:11.662956   12872 logs.go:276] 2 containers: [62e15884a85d b7c4892b0a5d]
	I0401 11:33:11.671938   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0401 11:33:11.700969   12872 logs.go:276] 1 containers: [b628ead59f77]
	I0401 11:33:11.709938   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0401 11:33:11.737948   12872 logs.go:276] 0 containers: []
	W0401 11:33:11.737948   12872 logs.go:278] No container was found matching "coredns"
	I0401 11:33:11.746937   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0401 11:33:11.775811   12872 logs.go:276] 1 containers: [d08e16bb0ded]
	I0401 11:33:11.788737   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0401 11:33:11.832987   12872 logs.go:276] 1 containers: [cd0b52822b82]
	I0401 11:33:11.843484   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0401 11:33:11.873827   12872 logs.go:276] 2 containers: [fc98c6d4ad09 3291e9558a9b]
	I0401 11:33:11.885750   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0401 11:33:11.911292   12872 logs.go:276] 1 containers: [db14ad1d26da]
	I0401 11:33:11.912375   12872 logs.go:123] Gathering logs for kube-apiserver [b7c4892b0a5d] ...
	I0401 11:33:11.912375   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7c4892b0a5d"
	I0401 11:33:12.000710   12872 logs.go:123] Gathering logs for etcd [b628ead59f77] ...
	I0401 11:33:12.000710   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b628ead59f77"
	I0401 11:33:12.064608   12872 logs.go:123] Gathering logs for kube-scheduler [d08e16bb0ded] ...
	I0401 11:33:12.064608   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08e16bb0ded"
	I0401 11:33:12.118358   12872 logs.go:123] Gathering logs for kube-controller-manager [fc98c6d4ad09] ...
	I0401 11:33:12.118358   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc98c6d4ad09"
	I0401 11:33:12.169313   12872 logs.go:123] Gathering logs for kube-controller-manager [3291e9558a9b] ...
	I0401 11:33:12.169313   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3291e9558a9b"
	I0401 11:33:12.204067   12872 logs.go:123] Gathering logs for dmesg ...
	I0401 11:33:12.204067   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:33:12.257145   12872 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:33:12.257299   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:33:12.904903   12872 logs.go:123] Gathering logs for kube-apiserver [62e15884a85d] ...
	I0401 11:33:12.904903   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e15884a85d"
	I0401 11:33:12.951094   12872 logs.go:123] Gathering logs for kindnet [db14ad1d26da] ...
	I0401 11:33:12.951094   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db14ad1d26da"
	I0401 11:33:12.991685   12872 logs.go:123] Gathering logs for Docker ...
	I0401 11:33:12.991814   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0401 11:33:13.069278   12872 logs.go:123] Gathering logs for kubelet ...
	I0401 11:33:13.069278   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:33:13.146727   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.146727   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.147725   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:13.147871   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:13.168730   12872 logs.go:123] Gathering logs for kube-proxy [cd0b52822b82] ...
	I0401 11:33:13.168730   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd0b52822b82"
	I0401 11:33:13.210442   12872 logs.go:123] Gathering logs for container status ...
	I0401 11:33:13.210514   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
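
The gathering pass above is mechanical: for each control-plane component, list matching container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail each container's last 400 log lines. A sketch of those two steps follows; it runs the commands locally purely for illustration, whereas minikube executes them inside the VM over SSH via ssh_runner.

```go
package logsgather

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose names match k8s_<component>,
// mirroring the `docker ps -a --filter=name=...` calls in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs returns the last n log lines of one container, like the
// `docker logs --tail 400 <id>` invocations above.
func tailLogs(id string, n int) (string, error) {
	var buf bytes.Buffer
	cmd := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id)
	cmd.Stdout = &buf
	cmd.Stderr = &buf // docker logs writes the container's stderr stream here too
	err := cmd.Run()
	return buf.String(), err
}
```
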
	I0401 11:33:13.322647   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:13.322789   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:33:13.322918   12872 out.go:239] X Problems detected in kubelet:
	W0401 11:33:13.323003   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.323003   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.323047   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:13.323047   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:13.323047   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:13.323047   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
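
Note that all four flagged kubelet lines date from 11:31:30, while ha-401500-m03 was still joining the cluster; "system:anonymous" forbidden errors at that point typically indicate the kubelet's informers started before its bootstrap client credentials were in place, which would explain why the problem does not recur later in the log.
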
	I0401 11:33:23.344682   12872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 11:33:23.375483   12872 api_server.go:72] duration metric: took 1m37.4842717s to wait for apiserver process to appear ...
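
The apiserver-process wait above boils down to retrying `pgrep -xnf kube-apiserver.*minikube.*` (run with sudo over SSH) until it exits 0. A local sketch of the same retry is below; `waitForProcess` is an illustrative name.

```go
package apiserverwait

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries `pgrep -xnf <pattern>` until a matching process
// exists (exit code 0) or the timeout expires. Minikube runs this inside
// the VM via ssh_runner; here it runs locally for illustration.
func waitForProcess(ctx context.Context, pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exited 0: the process is up
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no process matching %q within %v", pattern, timeout)
}
```
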
	I0401 11:33:23.375483   12872 api_server.go:88] waiting for apiserver healthz status ...
	I0401 11:33:23.388456   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0401 11:33:23.418718   12872 logs.go:276] 2 containers: [62e15884a85d b7c4892b0a5d]
	I0401 11:33:23.428984   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0401 11:33:23.452693   12872 logs.go:276] 1 containers: [b628ead59f77]
	I0401 11:33:23.463666   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0401 11:33:23.488468   12872 logs.go:276] 0 containers: []
	W0401 11:33:23.488468   12872 logs.go:278] No container was found matching "coredns"
	I0401 11:33:23.499029   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0401 11:33:23.526218   12872 logs.go:276] 1 containers: [d08e16bb0ded]
	I0401 11:33:23.536258   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0401 11:33:23.561780   12872 logs.go:276] 1 containers: [cd0b52822b82]
	I0401 11:33:23.571387   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0401 11:33:23.594293   12872 logs.go:276] 2 containers: [fc98c6d4ad09 3291e9558a9b]
	I0401 11:33:23.603990   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0401 11:33:23.636657   12872 logs.go:276] 1 containers: [db14ad1d26da]
	I0401 11:33:23.636747   12872 logs.go:123] Gathering logs for kube-proxy [cd0b52822b82] ...
	I0401 11:33:23.636747   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd0b52822b82"
	I0401 11:33:23.665753   12872 logs.go:123] Gathering logs for kube-controller-manager [fc98c6d4ad09] ...
	I0401 11:33:23.665753   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc98c6d4ad09"
	I0401 11:33:23.718225   12872 logs.go:123] Gathering logs for container status ...
	I0401 11:33:23.718225   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:33:23.843825   12872 logs.go:123] Gathering logs for kubelet ...
	I0401 11:33:23.843914   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:33:23.916302   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:23.916302   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:23.917017   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:23.917397   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:23.938137   12872 logs.go:123] Gathering logs for dmesg ...
	I0401 11:33:23.938137   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:33:23.965704   12872 logs.go:123] Gathering logs for kube-scheduler [d08e16bb0ded] ...
	I0401 11:33:23.965704   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08e16bb0ded"
	I0401 11:33:24.038052   12872 logs.go:123] Gathering logs for etcd [b628ead59f77] ...
	I0401 11:33:24.038052   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b628ead59f77"
	I0401 11:33:24.097077   12872 logs.go:123] Gathering logs for kube-controller-manager [3291e9558a9b] ...
	I0401 11:33:24.097077   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3291e9558a9b"
	I0401 11:33:24.130240   12872 logs.go:123] Gathering logs for kindnet [db14ad1d26da] ...
	I0401 11:33:24.130346   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db14ad1d26da"
	I0401 11:33:24.164959   12872 logs.go:123] Gathering logs for Docker ...
	I0401 11:33:24.165038   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0401 11:33:24.242554   12872 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:33:24.242554   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:33:24.525210   12872 logs.go:123] Gathering logs for kube-apiserver [62e15884a85d] ...
	I0401 11:33:24.525210   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e15884a85d"
	I0401 11:33:24.568905   12872 logs.go:123] Gathering logs for kube-apiserver [b7c4892b0a5d] ...
	I0401 11:33:24.568905   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7c4892b0a5d"
	I0401 11:33:24.661393   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:24.661393   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:33:24.661393   12872 out.go:239] X Problems detected in kubelet:
	W0401 11:33:24.661393   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:24.661393   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:24.661710   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:24.661710   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:24.661710   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:24.661793   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:33:34.686851   12872 api_server.go:253] Checking apiserver healthz at https://172.19.153.73:8443/healthz ...
	I0401 11:33:34.694523   12872 api_server.go:279] https://172.19.153.73:8443/healthz returned 200:
	ok
	I0401 11:33:34.694523   12872 round_trippers.go:463] GET https://172.19.153.73:8443/version
	I0401 11:33:34.694705   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:34.694740   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:34.694740   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:34.695859   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 11:33:34.695859   12872 api_server.go:141] control plane version: v1.29.3
	I0401 11:33:34.695859   12872 api_server.go:131] duration metric: took 11.3202957s to wait for apiserver health ...
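
The healthz wait above is a plain HTTP poll: GET /healthz until the apiserver answers 200 with body "ok", then GET /version for the control-plane version. A minimal sketch follows; `InsecureSkipVerify` stands in for the client-certificate TLS setup minikube actually uses and is not suitable outside a sketch.

```go
package healthz

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url (e.g. https://172.19.153.73:8443/healthz) until it
// returns HTTP 200 with body "ok", or the timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %v", url, timeout)
}
```
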
	I0401 11:33:34.695859   12872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 11:33:34.706995   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0401 11:33:34.737571   12872 logs.go:276] 2 containers: [62e15884a85d b7c4892b0a5d]
	I0401 11:33:34.748310   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0401 11:33:34.778031   12872 logs.go:276] 1 containers: [b628ead59f77]
	I0401 11:33:34.786732   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0401 11:33:34.819067   12872 logs.go:276] 0 containers: []
	W0401 11:33:34.819164   12872 logs.go:278] No container was found matching "coredns"
	I0401 11:33:34.829696   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0401 11:33:34.854494   12872 logs.go:276] 1 containers: [d08e16bb0ded]
	I0401 11:33:34.862141   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0401 11:33:34.898930   12872 logs.go:276] 1 containers: [cd0b52822b82]
	I0401 11:33:34.911428   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0401 11:33:34.937977   12872 logs.go:276] 2 containers: [fc98c6d4ad09 3291e9558a9b]
	I0401 11:33:34.950507   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0401 11:33:34.978431   12872 logs.go:276] 1 containers: [db14ad1d26da]
	I0401 11:33:34.978431   12872 logs.go:123] Gathering logs for kindnet [db14ad1d26da] ...
	I0401 11:33:34.978431   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db14ad1d26da"
	I0401 11:33:35.014419   12872 logs.go:123] Gathering logs for Docker ...
	I0401 11:33:35.014419   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0401 11:33:35.093393   12872 logs.go:123] Gathering logs for container status ...
	I0401 11:33:35.093393   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:33:35.209273   12872 logs.go:123] Gathering logs for dmesg ...
	I0401 11:33:35.209273   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:33:35.240517   12872 logs.go:123] Gathering logs for kube-apiserver [b7c4892b0a5d] ...
	I0401 11:33:35.240622   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7c4892b0a5d"
	I0401 11:33:35.331847   12872 logs.go:123] Gathering logs for etcd [b628ead59f77] ...
	I0401 11:33:35.331847   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b628ead59f77"
	I0401 11:33:35.383048   12872 logs.go:123] Gathering logs for kube-controller-manager [fc98c6d4ad09] ...
	I0401 11:33:35.383048   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc98c6d4ad09"
	I0401 11:33:35.454641   12872 logs.go:123] Gathering logs for kube-proxy [cd0b52822b82] ...
	I0401 11:33:35.454641   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd0b52822b82"
	I0401 11:33:35.488141   12872 logs.go:123] Gathering logs for kube-controller-manager [3291e9558a9b] ...
	I0401 11:33:35.488202   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3291e9558a9b"
	I0401 11:33:35.521732   12872 logs.go:123] Gathering logs for kubelet ...
	I0401 11:33:35.521732   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:33:35.592733   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:35.593137   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:35.593524   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:35.593750   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:35.616066   12872 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:33:35.616066   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:33:35.938008   12872 logs.go:123] Gathering logs for kube-apiserver [62e15884a85d] ...
	I0401 11:33:35.938008   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e15884a85d"
	I0401 11:33:35.983969   12872 logs.go:123] Gathering logs for kube-scheduler [d08e16bb0ded] ...
	I0401 11:33:35.984049   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08e16bb0ded"
	I0401 11:33:36.039526   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:36.039526   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:33:36.039526   12872 out.go:239] X Problems detected in kubelet:
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:36.039526   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:36.040571   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:33:46.059767   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:33:46.059767   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.059767   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.059767   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.076244   12872 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0401 11:33:46.091991   12872 system_pods.go:59] 24 kube-system pods found
	I0401 11:33:46.091991   12872 system_pods.go:61] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:33:46.091991   12872 system_pods.go:61] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:33:46.092375   12872 system_pods.go:61] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:33:46.092403   12872 system_pods.go:61] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:33:46.092403   12872 system_pods.go:61] "etcd-ha-401500-m03" [12ed1798-15e1-45fb-bc01-cb7d8cb56be1] Running
	I0401 11:33:46.092403   12872 system_pods.go:61] "kindnet-8f8ts" [bd227165-7098-4498-8ba6-6f903edfef84] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-apiserver-ha-401500-m03" [4e3c989c-2728-4eea-85f0-e98d51496a8e] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-controller-manager-ha-401500-m03" [f16272c3-226f-4480-997f-4e3269042d2d] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-proxy-ccgpw" [e8debcf2-d756-4fc4-9931-102b1eef4ee5] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-scheduler-ha-401500-m03" [bbe0b265-6b78-4984-9190-014904684180] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-vip-ha-401500-m03" [55b3a39a-a4d7-435b-ba14-3139eef4fef8] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:33:46.092673   12872 system_pods.go:74] duration metric: took 11.3966599s to wait for pod list to return data ...
	I0401 11:33:46.092673   12872 default_sa.go:34] waiting for default service account to be created ...
	I0401 11:33:46.092840   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/default/serviceaccounts
	I0401 11:33:46.092840   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.092840   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.092966   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.098848   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:46.099795   12872 default_sa.go:45] found service account: "default"
	I0401 11:33:46.099795   12872 default_sa.go:55] duration metric: took 7.1219ms for default service account to be created ...
	I0401 11:33:46.099908   12872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 11:33:46.099908   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:33:46.100028   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.100028   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.100028   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.110230   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:33:46.121996   12872 system_pods.go:86] 24 kube-system pods found
	I0401 11:33:46.121996   12872 system_pods.go:89] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "etcd-ha-401500-m03" [12ed1798-15e1-45fb-bc01-cb7d8cb56be1] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kindnet-8f8ts" [bd227165-7098-4498-8ba6-6f903edfef84] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-apiserver-ha-401500-m03" [4e3c989c-2728-4eea-85f0-e98d51496a8e] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-controller-manager-ha-401500-m03" [f16272c3-226f-4480-997f-4e3269042d2d] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-proxy-ccgpw" [e8debcf2-d756-4fc4-9931-102b1eef4ee5] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-scheduler-ha-401500-m03" [bbe0b265-6b78-4984-9190-014904684180] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-vip-ha-401500-m03" [55b3a39a-a4d7-435b-ba14-3139eef4fef8] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:33:46.121996   12872 system_pods.go:126] duration metric: took 22.0875ms to wait for k8s-apps to be running ...
	I0401 11:33:46.121996   12872 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 11:33:46.134473   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:33:46.164622   12872 system_svc.go:56] duration metric: took 41.6057ms WaitForService to wait for kubelet
	I0401 11:33:46.164622   12872 kubeadm.go:576] duration metric: took 2m0.2732495s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:33:46.164702   12872 node_conditions.go:102] verifying NodePressure condition ...
	I0401 11:33:46.164702   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes
	I0401 11:33:46.164702   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.164702   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.164702   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.169745   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:46.172452   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:33:46.172510   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:33:46.172510   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:33:46.172510   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:33:46.172510   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:33:46.172510   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:33:46.172510   12872 node_conditions.go:105] duration metric: took 7.8082ms to run NodePressure ...
	I0401 11:33:46.172510   12872 start.go:240] waiting for startup goroutines ...
	I0401 11:33:46.172626   12872 start.go:254] writing updated cluster config ...
	I0401 11:33:46.186814   12872 ssh_runner.go:195] Run: rm -f paused
	I0401 11:33:46.345109   12872 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 11:33:46.347954   12872 out.go:177] * Done! kubectl is now configured to use "ha-401500" cluster and "default" namespace by default
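	
	The sections that follow are the post-mortem log bundle gathered after the failure; the "==> ... <==" headers match the output of minikube's logs command. A minimal sketch for regenerating the same bundle by hand, assuming the ha-401500 profile still exists (--file writes the dump to a file instead of stdout):
	
	  out/minikube-windows-amd64.exe -p ha-401500 logs --file=ha-401500-logs.txt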
	
	
	==> Docker <==
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.278699320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.279558110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.393217035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.393494232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.393587430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.394014625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.476032933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.476340630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.476735925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.483332845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 cri-dockerd[1236]: time="2024-04-01T11:23:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ec0ebe3869c2e14a6d44daf4e8f82997e2dbe78e230e4717f6b006a33724e5e/resolv.conf as [nameserver 172.19.144.1]"
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.838583449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.838694147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.838715847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.839475438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.042482365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.042811363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.042861363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.043165061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:27 ha-401500 cri-dockerd[1236]: time="2024-04-01T11:34:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b061bd4ee58e57f9d7d8730401159795cc67f7ee12f8ba91a863233ca44c1931/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 01 11:34:28 ha-401500 cri-dockerd[1236]: time="2024-04-01T11:34:28Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.723032022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.723267320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.723289220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.724557008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a5f0f2a70ea86       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   b061bd4ee58e5       busybox-7fdf7869d9-f5xk7
	7060906f8cfb4       6e38f40d628db                                                                                         11 minutes ago       Running             storage-provisioner       0                   9ec0ebe3869c2       storage-provisioner
	019f28c8ae9c2       cbb01a7bd410d                                                                                         11 minutes ago       Running             coredns                   0                   4e22619d4f531       coredns-76f75df574-4xvlf
	5cf28c4d18269       cbb01a7bd410d                                                                                         11 minutes ago       Running             coredns                   0                   953d3ea584fb7       coredns-76f75df574-vjslq
	6b3a35c1df165       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              11 minutes ago       Running             kindnet-cni               0                   35c87d7595587       kindnet-v22wx
	3b771f391aa27       a1d263b5dc5b0                                                                                         11 minutes ago       Running             kube-proxy                0                   52e412ee73928       kube-proxy-hqcpv
	55b7d7fcbecfb       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     12 minutes ago       Running             kube-vip                  0                   6b07eb59f148c       kube-vip-ha-401500
	c01764f3eda1e       39f995c9f1996                                                                                         12 minutes ago       Running             kube-apiserver            0                   73e0584affcfd       kube-apiserver-ha-401500
	2fcf6eff5adbe       3861cfcd7c04c                                                                                         12 minutes ago       Running             etcd                      0                   ac7bd8f02839f       etcd-ha-401500
	d563352b33191       6052a25da3f97                                                                                         12 minutes ago       Running             kube-controller-manager   0                   8ea839602f322       kube-controller-manager-ha-401500
	57c210811c209       8c390d98f50c0                                                                                         12 minutes ago       Running             kube-scheduler            0                   c3c232b9bbe6f       kube-scheduler-ha-401500
	
	
	==> coredns [019f28c8ae9c] <==
	[INFO] 10.244.2.2:54522 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001980082s
	[INFO] 10.244.2.2:59489 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.115079073s
	[INFO] 10.244.1.2:50252 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.044782091s
	[INFO] 10.244.0.4:43900 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033609794s
	[INFO] 10.244.0.4:39947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249898s
	[INFO] 10.244.0.4:35641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000488095s
	[INFO] 10.244.2.2:35908 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280598s
	[INFO] 10.244.2.2:41756 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.018302235s
	[INFO] 10.244.2.2:52057 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000398296s
	[INFO] 10.244.2.2:41041 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164798s
	[INFO] 10.244.1.2:56971 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108999s
	[INFO] 10.244.1.2:56448 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000185098s
	[INFO] 10.244.1.2:34570 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000332597s
	[INFO] 10.244.0.4:35168 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190098s
	[INFO] 10.244.2.2:52214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130898s
	[INFO] 10.244.2.2:52209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160898s
	[INFO] 10.244.2.2:53111 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158099s
	[INFO] 10.244.2.2:39428 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140698s
	[INFO] 10.244.1.2:57304 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135399s
	[INFO] 10.244.0.4:60345 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000274498s
	[INFO] 10.244.2.2:48660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134799s
	[INFO] 10.244.2.2:39169 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102699s
	[INFO] 10.244.1.2:33430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101899s
	[INFO] 10.244.1.2:51884 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000672s
	[INFO] 10.244.1.2:45317 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000577s
	
	
	==> coredns [5cf28c4d1826] <==
	[INFO] 10.244.0.4:57108 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000265898s
	[INFO] 10.244.0.4:58394 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030456323s
	[INFO] 10.244.0.4:43973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212498s
	[INFO] 10.244.0.4:51457 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123599s
	[INFO] 10.244.2.2:38721 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.110294596s
	[INFO] 10.244.2.2:54063 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000233198s
	[INFO] 10.244.2.2:47298 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060999s
	[INFO] 10.244.2.2:53583 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131999s
	[INFO] 10.244.1.2:34615 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078199s
	[INFO] 10.244.1.2:35189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000067899s
	[INFO] 10.244.1.2:36491 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000254598s
	[INFO] 10.244.1.2:52312 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099599s
	[INFO] 10.244.1.2:34153 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213299s
	[INFO] 10.244.0.4:43708 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130699s
	[INFO] 10.244.0.4:41927 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000197298s
	[INFO] 10.244.0.4:37456 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000638s
	[INFO] 10.244.1.2:33504 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000539595s
	[INFO] 10.244.1.2:58378 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126299s
	[INFO] 10.244.1.2:56306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000631s
	[INFO] 10.244.0.4:60083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234898s
	[INFO] 10.244.0.4:42878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215198s
	[INFO] 10.244.0.4:53304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306097s
	[INFO] 10.244.2.2:44856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102499s
	[INFO] 10.244.2.2:55794 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110099s
	[INFO] 10.244.1.2:43653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000248898s
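	
	Both coredns replicas above are answering the cluster-internal lookups the suite exercises. A sketch for replaying one such query from a throwaway pod (pod name dnscheck is illustrative), reusing the busybox image the suite already pulls:
	
	  kubectl --context ha-401500 run --rm dnscheck --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- nslookup kubernetes.default.svc.cluster.local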
	
	
	==> describe nodes <==
	Name:               ha-401500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-401500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=ha-401500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T11_23_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:35:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:34:33 +0000   Mon, 01 Apr 2024 11:23:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:34:33 +0000   Mon, 01 Apr 2024 11:23:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:34:33 +0000   Mon, 01 Apr 2024 11:23:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:34:33 +0000   Mon, 01 Apr 2024 11:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.153.73
	  Hostname:    ha-401500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 51422a693e5d4c32850905b4a00e3c09
	  System UUID:                5ddecb87-f7c6-5c44-af78-64f197febc43
	  Boot ID:                    80ab2a7b-f7d8-4389-970c-c35c9af0e0bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-f5xk7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-76f75df574-4xvlf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-76f75df574-vjslq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-401500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-v22wx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-401500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-401500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hqcpv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-401500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-401500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-401500 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-401500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-401500 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node ha-401500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node ha-401500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node ha-401500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-401500 event: Registered Node ha-401500 in Controller
	  Normal  NodeReady                11m                kubelet          Node ha-401500 status is now: NodeReady
	  Normal  RegisteredNode           7m44s              node-controller  Node ha-401500 event: Registered Node ha-401500 in Controller
	  Normal  RegisteredNode           2m26s              node-controller  Node ha-401500 event: Registered Node ha-401500 in Controller
	
	
	Name:               ha-401500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-401500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=ha-401500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T11_27_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:35:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:34:57 +0000   Mon, 01 Apr 2024 11:27:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:34:57 +0000   Mon, 01 Apr 2024 11:27:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:34:57 +0000   Mon, 01 Apr 2024 11:27:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:34:57 +0000   Mon, 01 Apr 2024 11:27:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.149.50
	  Hostname:    ha-401500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3811c44e7a264a1ea0a703dad5809815
	  System UUID:                6b38e67a-6be9-c344-89c9-dafa56ee053a
	  Boot ID:                    b4385542-57cb-4255-b5f6-eb5d30702515
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-q7xs6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-401500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m6s
	  kube-system                 kindnet-92s2r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m7s
	  kube-system                 kube-apiserver-ha-401500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-controller-manager-ha-401500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-proxy-28zds                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-ha-401500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-vip-ha-401500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m                   kube-proxy       
	  Normal  Starting                 8m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m7s (x2 over 8m7s)  kubelet          Node ha-401500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m7s (x2 over 8m7s)  kubelet          Node ha-401500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m7s (x2 over 8m7s)  kubelet          Node ha-401500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m4s                 node-controller  Node ha-401500-m02 event: Registered Node ha-401500-m02 in Controller
	  Normal  NodeReady                7m50s                kubelet          Node ha-401500-m02 status is now: NodeReady
	  Normal  RegisteredNode           7m44s                node-controller  Node ha-401500-m02 event: Registered Node ha-401500-m02 in Controller
	  Normal  RegisteredNode           2m26s                node-controller  Node ha-401500-m02 event: Registered Node ha-401500-m02 in Controller
	
	
	Name:               ha-401500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-401500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=ha-401500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T11_31_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:31:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:35:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:34:34 +0000   Mon, 01 Apr 2024 11:31:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:34:34 +0000   Mon, 01 Apr 2024 11:31:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:34:34 +0000   Mon, 01 Apr 2024 11:31:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:34:34 +0000   Mon, 01 Apr 2024 11:31:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.145.208
	  Hostname:    ha-401500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 17b921dbe6774dc4ba1f49208575ffe0
	  System UUID:                dfcb8064-0682-e848-ac60-5df21a749ba5
	  Boot ID:                    f6f66cd0-32ce-4c94-b291-e8fa4bc868dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gr89z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-401500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-8f8ts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-401500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ha-401500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-ccgpw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-401500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-vip-ha-401500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  Starting                 4m4s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m4s)  kubelet          Node ha-401500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m4s)  kubelet          Node ha-401500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m4s)  kubelet          Node ha-401500-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-401500-m03 event: Registered Node ha-401500-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-401500-m03 event: Registered Node ha-401500-m03 in Controller
	  Normal  RegisteredNode           2m26s                node-controller  Node ha-401500-m03 event: Registered Node ha-401500-m03 in Controller
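	
	The three node descriptions above come from the apiserver and can be regenerated in one call, assuming the kubeconfig context created for the profile is still present:
	
	  kubectl --context ha-401500 describe nodes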
	
	
	==> dmesg <==
	[  +1.967084] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.197690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 1 11:22] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.188352] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[ +32.951072] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.134308] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.586944] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.201745] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.232104] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +2.872685] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.237073] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.228434] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.313386] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[Apr 1 11:23] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.124610] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.741406] systemd-fstab-generator[1541]: Ignoring "noauto" option for root device
	[  +6.547189] systemd-fstab-generator[1803]: Ignoring "noauto" option for root device
	[  +0.117853] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.186350] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.832357] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[ +13.740676] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.034429] kauditd_printk_skb: 29 callbacks suppressed
	[Apr 1 11:27] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [2fcf6eff5adb] <==
	{"level":"warn","ts":"2024-04-01T11:31:33.137173Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://172.19.145.208:2380/version","remote-member-id":"dc7977933efe4c2a","error":"Get \"https://172.19.145.208:2380/version\": dial tcp 172.19.145.208:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T11:31:33.137588Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"dc7977933efe4c2a","error":"Get \"https://172.19.145.208:2380/version\": dial tcp 172.19.145.208:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T11:31:33.705269Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"dc7977933efe4c2a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-04-01T11:31:35.10515Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"dc7977933efe4c2a"}
	{"level":"info","ts":"2024-04-01T11:31:35.121686Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"42cac7d33756cbeb","remote-peer-id":"dc7977933efe4c2a"}
	{"level":"info","ts":"2024-04-01T11:31:35.128009Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"42cac7d33756cbeb","remote-peer-id":"dc7977933efe4c2a"}
	{"level":"info","ts":"2024-04-01T11:31:35.178714Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"42cac7d33756cbeb","to":"dc7977933efe4c2a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-01T11:31:35.178949Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"42cac7d33756cbeb","remote-peer-id":"dc7977933efe4c2a"}
	{"level":"info","ts":"2024-04-01T11:31:35.353419Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"42cac7d33756cbeb","to":"dc7977933efe4c2a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-01T11:31:35.353477Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"42cac7d33756cbeb","remote-peer-id":"dc7977933efe4c2a"}
	{"level":"warn","ts":"2024-04-01T11:31:35.631631Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"dc7977933efe4c2a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-04-01T11:31:36.798721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.641981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.19.153.73\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-04-01T11:31:36.798866Z","caller":"traceutil/trace.go:171","msg":"trace[445950869] range","detail":"{range_begin:/registry/masterleases/172.19.153.73; range_end:; response_count:1; response_revision:1588; }","duration":"253.855784ms","start":"2024-04-01T11:31:36.544995Z","end":"2024-04-01T11:31:36.798851Z","steps":["trace[445950869] 'range keys from in-memory index tree'  (duration: 252.00836ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T11:31:36.992624Z","caller":"traceutil/trace.go:171","msg":"trace[1377278168] linearizableReadLoop","detail":"{readStateIndex:1765; appliedIndex:1765; }","duration":"147.270046ms","start":"2024-04-01T11:31:36.845333Z","end":"2024-04-01T11:31:36.992603Z","steps":["trace[1377278168] 'read index received'  (duration: 147.265046ms)","trace[1377278168] 'applied index is now lower than readState.Index'  (duration: 3.8µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T11:31:36.993279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.876855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-401500-m03\" ","response":"range_response_count:1 size:3375"}
	{"level":"info","ts":"2024-04-01T11:31:36.993333Z","caller":"traceutil/trace.go:171","msg":"trace[2009747453] range","detail":"{range_begin:/registry/minions/ha-401500-m03; range_end:; response_count:1; response_revision:1588; }","duration":"148.021256ms","start":"2024-04-01T11:31:36.845302Z","end":"2024-04-01T11:31:36.993323Z","steps":["trace[2009747453] 'agreement among raft nodes before linearized reading'  (duration: 147.391548ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T11:31:37.099482Z","caller":"traceutil/trace.go:171","msg":"trace[1982563520] transaction","detail":"{read_only:false; response_revision:1589; number_of_response:1; }","duration":"104.865315ms","start":"2024-04-01T11:31:36.994598Z","end":"2024-04-01T11:31:37.099463Z","steps":["trace[1982563520] 'process raft request'  (duration: 94.462485ms)","trace[1982563520] 'compare'  (duration: 10.192027ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T11:31:42.442758Z","caller":"traceutil/trace.go:171","msg":"trace[1593952810] transaction","detail":"{read_only:false; response_revision:1606; number_of_response:1; }","duration":"111.286294ms","start":"2024-04-01T11:31:42.33145Z","end":"2024-04-01T11:31:42.442737Z","steps":["trace[1593952810] 'process raft request'  (duration: 111.050691ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T11:31:44.07045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42cac7d33756cbeb switched to configuration voters=(4812878861779258347 7383360408398767383 15886860634826886186)"}
	{"level":"info","ts":"2024-04-01T11:31:44.070627Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"4a52e1ae85a365b0","local-member-id":"42cac7d33756cbeb"}
	{"level":"info","ts":"2024-04-01T11:31:44.070656Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"42cac7d33756cbeb","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"dc7977933efe4c2a"}
	{"level":"info","ts":"2024-04-01T11:33:07.588851Z","caller":"traceutil/trace.go:171","msg":"trace[684732927] transaction","detail":"{read_only:false; response_revision:1861; number_of_response:1; }","duration":"184.475436ms","start":"2024-04-01T11:33:07.404351Z","end":"2024-04-01T11:33:07.588826Z","steps":["trace[684732927] 'process raft request'  (duration: 88.81821ms)","trace[684732927] 'compare'  (duration: 95.569827ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T11:33:21.177453Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1088}
	{"level":"info","ts":"2024-04-01T11:33:21.286572Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1088,"took":"108.67259ms","hash":398787122,"current-db-size-bytes":3440640,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":2007040,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-04-01T11:33:21.286764Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":398787122,"revision":1088,"compact-revision":-1}
	
	
	==> kernel <==
	 11:35:34 up 14 min,  0 users,  load average: 0.85, 0.54, 0.35
	Linux ha-401500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6b3a35c1df16] <==
	I0401 11:34:50.707327       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
	I0401 11:35:00.723672       1 main.go:223] Handling node with IPs: map[172.19.153.73:{}]
	I0401 11:35:00.723722       1 main.go:227] handling current node
	I0401 11:35:00.723737       1 main.go:223] Handling node with IPs: map[172.19.149.50:{}]
	I0401 11:35:00.723744       1 main.go:250] Node ha-401500-m02 has CIDR [10.244.1.0/24] 
	I0401 11:35:00.724463       1 main.go:223] Handling node with IPs: map[172.19.145.208:{}]
	I0401 11:35:00.724553       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
	I0401 11:35:10.734013       1 main.go:223] Handling node with IPs: map[172.19.153.73:{}]
	I0401 11:35:10.734322       1 main.go:227] handling current node
	I0401 11:35:10.734362       1 main.go:223] Handling node with IPs: map[172.19.149.50:{}]
	I0401 11:35:10.734424       1 main.go:250] Node ha-401500-m02 has CIDR [10.244.1.0/24] 
	I0401 11:35:10.734694       1 main.go:223] Handling node with IPs: map[172.19.145.208:{}]
	I0401 11:35:10.734775       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
	I0401 11:35:20.744669       1 main.go:223] Handling node with IPs: map[172.19.153.73:{}]
	I0401 11:35:20.744730       1 main.go:227] handling current node
	I0401 11:35:20.744743       1 main.go:223] Handling node with IPs: map[172.19.149.50:{}]
	I0401 11:35:20.744750       1 main.go:250] Node ha-401500-m02 has CIDR [10.244.1.0/24] 
	I0401 11:35:20.745102       1 main.go:223] Handling node with IPs: map[172.19.145.208:{}]
	I0401 11:35:20.745136       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
	I0401 11:35:30.758646       1 main.go:223] Handling node with IPs: map[172.19.153.73:{}]
	I0401 11:35:30.758783       1 main.go:227] handling current node
	I0401 11:35:30.758817       1 main.go:223] Handling node with IPs: map[172.19.149.50:{}]
	I0401 11:35:30.758826       1 main.go:250] Node ha-401500-m02 has CIDR [10.244.1.0/24] 
	I0401 11:35:30.759271       1 main.go:223] Handling node with IPs: map[172.19.145.208:{}]
	I0401 11:35:30.759306       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
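	
	Each reconcile pass above re-installs a route to every remote node's pod CIDR via that node's IP. A sketch for verifying the routes kindnet programmed, assuming its default routing mode (device name is illustrative):
	
	  out/minikube-windows-amd64.exe -p ha-401500 ssh -- ip route show
	  # expected to include, roughly:
	  #   10.244.1.0/24 via 172.19.149.50 dev eth0
	  #   10.244.2.0/24 via 172.19.145.208 dev eth0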
	
	
	==> kube-apiserver [c01764f3eda1] <==
	I0401 11:23:26.567024       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 11:23:26.917294       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 11:23:28.464351       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 11:23:28.488431       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 11:23:28.531752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 11:23:40.495640       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0401 11:23:40.839168       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0401 11:30:42.589017       1 trace.go:236] Trace[1540639200]: "Update" accept:application/json, */*,audit-id:44480bd1-3ac5-479c-9676-96bf1c895e58,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (01-Apr-2024 11:30:42.016) (total time: 572ms):
	Trace[1540639200]: ["GuaranteedUpdate etcd3" audit-id:44480bd1-3ac5-479c-9676-96bf1c895e58,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 571ms (11:30:42.017)
	Trace[1540639200]:  ---"Txn call completed" 570ms (11:30:42.588)]
	Trace[1540639200]: [572.011955ms] [572.011955ms] END
	I0401 11:30:57.241001       1 trace.go:236] Trace[1094372534]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.153.73,type:*v1.Endpoints,resource:apiServerIPInfo (01-Apr-2024 11:30:56.541) (total time: 699ms):
	Trace[1094372534]: ---"Transaction prepared" 290ms (11:30:56.841)
	Trace[1094372534]: ---"Txn call completed" 399ms (11:30:57.240)
	Trace[1094372534]: [699.175936ms] [699.175936ms] END
	E0401 11:31:31.719000       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0401 11:31:31.719158       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0401 11:31:31.719481       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 146.002µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0401 11:31:31.721029       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0401 11:31:31.721293       1 timeout.go:142] post-timeout activity - time-elapsed: 2.35833ms, PATCH "/api/v1/namespaces/default/events/ha-401500-m03.17c224a2ffeae3ac" result: <nil>
	I0401 11:31:37.100572       1 trace.go:236] Trace[1790901363]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.153.73,type:*v1.Endpoints,resource:apiServerIPInfo (01-Apr-2024 11:31:36.543) (total time: 556ms):
	Trace[1790901363]: ---"initial value restored" 255ms (11:31:36.799)
	Trace[1790901363]: ---"Transaction prepared" 193ms (11:31:36.993)
	Trace[1790901363]: ---"Txn call completed" 106ms (11:31:37.100)
	Trace[1790901363]: [556.538778ms] [556.538778ms] END
	
	
	==> kube-controller-manager [d563352b3319] <==
	I0401 11:34:25.916794       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0401 11:34:25.968779       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-q7xs6"
	I0401 11:34:26.016978       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gr89z"
	I0401 11:34:26.023050       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-f5xk7"
	I0401 11:34:26.077049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="159.820642ms"
	I0401 11:34:26.113985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="36.46476ms"
	I0401 11:34:26.262863       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-29sln"
	I0401 11:34:26.310503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="195.90181ms"
	I0401 11:34:26.416015       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-p4h7g"
	I0401 11:34:26.451466       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-b288j"
	I0401 11:34:26.487399       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-ljrnq"
	I0401 11:34:26.491768       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-hfdfm"
	I0401 11:34:26.525192       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-6cnqv"
	I0401 11:34:26.601862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="291.262082ms"
	I0401 11:34:26.659363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.093824ms"
	I0401 11:34:26.660047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="144.299µs"
	I0401 11:34:26.768706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="36.217961ms"
	I0401 11:34:26.769545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="80.6µs"
	I0401 11:34:27.949641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="87.499µs"
	I0401 11:34:28.899535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="94.19232ms"
	I0401 11:34:28.899757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="131.698µs"
	I0401 11:34:29.050229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.261491ms"
	I0401 11:34:29.050456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.099µs"
	I0401 11:34:29.397821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.797822ms"
	I0401 11:34:29.397893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.3µs"
	
	
	==> kube-proxy [3b771f391aa2] <==
	I0401 11:23:42.238608       1 server_others.go:72] "Using iptables proxy"
	I0401 11:23:42.258527       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.153.73"]
	I0401 11:23:42.454730       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 11:23:42.454794       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 11:23:42.454828       1 server_others.go:168] "Using iptables Proxier"
	I0401 11:23:42.460899       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 11:23:42.462244       1 server.go:865] "Version info" version="v1.29.3"
	I0401 11:23:42.462365       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 11:23:42.468342       1 config.go:97] "Starting endpoint slice config controller"
	I0401 11:23:42.468458       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 11:23:42.468664       1 config.go:188] "Starting service config controller"
	I0401 11:23:42.468747       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 11:23:42.469475       1 config.go:315] "Starting node config controller"
	I0401 11:23:42.469930       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 11:23:42.479975       1 shared_informer.go:318] Caches are synced for node config
	I0401 11:23:42.569506       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 11:23:42.569520       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [57c210811c20] <==
	W0401 11:23:25.227548       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 11:23:25.227646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 11:23:25.229694       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 11:23:25.229883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0401 11:23:25.279270       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 11:23:25.279369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 11:23:25.421887       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 11:23:25.422345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 11:23:25.496793       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 11:23:25.497047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 11:23:25.499516       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 11:23:25.500167       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 11:23:25.591673       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 11:23:25.591922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:23:25.605803       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 11:23:25.605836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0401 11:23:28.596668       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0401 11:31:31.057365       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8f8ts\": pod kindnet-8f8ts is already assigned to node \"ha-401500-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8f8ts" node="ha-401500-m03"
	E0401 11:31:31.057806       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod bd227165-7098-4498-8ba6-6f903edfef84(kube-system/kindnet-8f8ts) wasn't assumed so cannot be forgotten"
	E0401 11:31:31.058211       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8f8ts\": pod kindnet-8f8ts is already assigned to node \"ha-401500-m03\"" pod="kube-system/kindnet-8f8ts"
	I0401 11:31:31.058243       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8f8ts" node="ha-401500-m03"
	E0401 11:31:31.072658       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ccgpw\": pod kube-proxy-ccgpw is already assigned to node \"ha-401500-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ccgpw" node="ha-401500-m03"
	E0401 11:31:31.072838       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod e8debcf2-d756-4fc4-9931-102b1eef4ee5(kube-system/kube-proxy-ccgpw) wasn't assumed so cannot be forgotten"
	E0401 11:31:31.072895       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ccgpw\": pod kube-proxy-ccgpw is already assigned to node \"ha-401500-m03\"" pod="kube-system/kube-proxy-ccgpw"
	I0401 11:31:31.073021       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ccgpw" node="ha-401500-m03"
	
	
	==> kubelet <==
	Apr 01 11:31:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:31:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:32:28 ha-401500 kubelet[2862]: E0401 11:32:28.858980    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:32:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:32:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:32:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:32:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:33:28 ha-401500 kubelet[2862]: E0401 11:33:28.859765    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:33:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:33:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:33:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:33:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:34:26 ha-401500 kubelet[2862]: I0401 11:34:26.075914    2862 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=637.070958259 podStartE2EDuration="10m37.070958259s" podCreationTimestamp="2024-04-01 11:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-01 11:23:52.190416188 +0000 UTC m=+23.776397591" watchObservedRunningTime="2024-04-01 11:34:26.070958259 +0000 UTC m=+657.656939662"
	Apr 01 11:34:26 ha-401500 kubelet[2862]: I0401 11:34:26.077490    2862 topology_manager.go:215] "Topology Admit Handler" podUID="7ffe0368-8016-42bf-8427-061b631500ce" podNamespace="default" podName="busybox-7fdf7869d9-f5xk7"
	Apr 01 11:34:26 ha-401500 kubelet[2862]: I0401 11:34:26.225621    2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6c98\" (UniqueName: \"kubernetes.io/projected/7ffe0368-8016-42bf-8427-061b631500ce-kube-api-access-n6c98\") pod \"busybox-7fdf7869d9-f5xk7\" (UID: \"7ffe0368-8016-42bf-8427-061b631500ce\") " pod="default/busybox-7fdf7869d9-f5xk7"
	Apr 01 11:34:28 ha-401500 kubelet[2862]: E0401 11:34:28.870900    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:34:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:34:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:34:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:34:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:35:28 ha-401500 kubelet[2862]: E0401 11:35:28.859760    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:35:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:35:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:35:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:35:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0401 11:35:25.411223    6608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-401500 -n ha-401500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-401500 -n ha-401500: (13.0464397s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-401500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (72.38s)
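The kubelet log above fails its iptables canary every minute because the guest kernel cannot initialize the IPv6 nat table. A minimal manual check on the node, assuming the standard ip6table_nat kernel module name (a suggested diagnostic, not part of the test run):

	# open a shell on the profile's primary control-plane node
	out/minikube-windows-amd64.exe -p ha-401500 ssh
	# inside the node: try to load IPv6 NAT support, then list the table
	sudo modprobe ip6table_nat   # fails if the minikube ISO kernel omits the module
	sudo ip6tables -t nat -L     # succeeds once the nat table initializes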

x
+
TestMultiControlPlane/serial/CopyFile (598.9s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 status --output json -v=7 --alsologtostderr: (51.4365927s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500:/home/docker/cp-test.txt: (10.3089983s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt": (10.1910335s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500.txt: (10.2681919s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt": (10.1077614s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500_ha-401500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500_ha-401500-m02.txt: (17.7273948s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt": (10.2406907s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test_ha-401500_ha-401500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test_ha-401500_ha-401500-m02.txt": (10.1228487s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500_ha-401500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500_ha-401500-m03.txt: (17.7998724s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt"
E0401 11:43:23.458219    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt": (10.1925785s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test_ha-401500_ha-401500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test_ha-401500_ha-401500-m03.txt": (10.2691089s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt ha-401500-m04:/home/docker/cp-test_ha-401500_ha-401500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500:/home/docker/cp-test.txt ha-401500-m04:/home/docker/cp-test_ha-401500_ha-401500-m04.txt: (17.8651076s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test.txt": (10.1764551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test_ha-401500_ha-401500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test_ha-401500_ha-401500-m04.txt": (10.1241987s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500-m02:/home/docker/cp-test.txt: (10.2164839s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt": (10.1648712s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m02.txt: (10.1743913s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt": (10.1129598s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m02_ha-401500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m02_ha-401500.txt: (17.8324887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt": (10.1160969s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test_ha-401500-m02_ha-401500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test_ha-401500-m02_ha-401500.txt": (10.2033195s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500-m02_ha-401500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500-m02_ha-401500-m03.txt: (17.7694448s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt": (10.176216s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test_ha-401500-m02_ha-401500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test_ha-401500-m02_ha-401500-m03.txt": (10.1835472s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt ha-401500-m04:/home/docker/cp-test_ha-401500-m02_ha-401500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt ha-401500-m04:/home/docker/cp-test_ha-401500-m02_ha-401500-m04.txt: (17.6718963s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test.txt": (10.1235568s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test_ha-401500-m02_ha-401500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test_ha-401500-m02_ha-401500-m04.txt": (10.1646286s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500-m03:/home/docker/cp-test.txt: (10.159718s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt": (10.1841982s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m03.txt: (10.0344018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt": (10.137361s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m03_ha-401500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m03_ha-401500.txt: (17.6691661s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt": (10.1119231s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test_ha-401500-m03_ha-401500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test_ha-401500-m03_ha-401500.txt": (10.2058072s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500-m03_ha-401500-m02.txt
E0401 11:48:23.456626    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500-m03_ha-401500-m02.txt: (17.6757715s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt": (10.1267866s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test_ha-401500-m03_ha-401500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test_ha-401500-m03_ha-401500-m02.txt": (10.0338075s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt ha-401500-m04:/home/docker/cp-test_ha-401500-m03_ha-401500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt ha-401500-m04:/home/docker/cp-test_ha-401500-m03_ha-401500-m04.txt: (17.6474959s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test.txt": (10.1316151s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test_ha-401500-m03_ha-401500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test_ha-401500-m03_ha-401500-m04.txt": (10.1636131s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500-m04:/home/docker/cp-test.txt: (10.1536078s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt": (10.0927166s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m04.txt: (10.2113595s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt": (10.1940339s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m04_ha-401500.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m04_ha-401500.txt: exit status 1 (3.4136829s)

** stderr ** 
	W0401 11:50:06.491986   13448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
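The stderr warning above (also present in the earlier log dump) means the Docker CLI context store under C:\Users\jenkins.minikube6\.docker\contexts\meta has no entry for "default"; minikube logs it and falls back. A quick way to confirm the CLI state on the affected host, assuming a standard Docker CLI install (a suggested check, not performed by the test):

	# the built-in "default" context needs no meta.json on disk, so listing and
	# re-selecting it verifies the CLI without recreating the missing file
	docker context ls
	docker context use default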
helpers_test.go:558: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m04_ha-401500.txt
helpers_test.go:561: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500:/home/docker/cp-test_ha-401500-m04_ha-401500.txt": exit status 1
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 \"sudo cat /home/docker/cp-test.txt\"": context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500 \"sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500.txt\"": context deadline exceeded
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt: context deadline exceeded (0s)
helpers_test.go:558: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt
helpers_test.go:561: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m02:/home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt": context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 \"sudo cat /home/docker/cp-test.txt\"": context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m02 \"sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m02.txt\"": context deadline exceeded
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt: context deadline exceeded (0s)
helpers_test.go:558: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt
helpers_test.go:561: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt ha-401500-m03:/home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt": context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 \"sudo cat /home/docker/cp-test.txt\"": context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout: out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 "sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m03 \"sudo cat /home/docker/cp-test_ha-401500-m04_ha-401500-m03.txt\"": context deadline exceeded
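Each step in the cascade above pairs a minikube cp with an ssh read-back of the copied file. To replay one copy-and-verify round against this profile by hand once node ha-401500-m04 responds again (both commands taken verbatim from earlier helper invocations in this test):

	# copy the fixture onto the m04 node, then cat it back to confirm the contents
	out/minikube-windows-amd64.exe -p ha-401500 cp testdata\cp-test.txt ha-401500-m04:/home/docker/cp-test.txt
	out/minikube-windows-amd64.exe -p ha-401500 ssh -n ha-401500-m04 "sudo cat /home/docker/cp-test.txt"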
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-401500 -n ha-401500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-401500 -n ha-401500: (13.1561821s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 logs -n 25: (9.7767184s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| ssh     | ha-401500 ssh -n ha-401500 sudo cat                                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:45 UTC | 01 Apr 24 11:45 UTC |
	|         | /home/docker/cp-test_ha-401500-m02_ha-401500.txt                                                                          |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:45 UTC | 01 Apr 24 11:45 UTC |
	|         | ha-401500-m03:/home/docker/cp-test_ha-401500-m02_ha-401500-m03.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:45 UTC | 01 Apr 24 11:46 UTC |
	|         | ha-401500-m02 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n ha-401500-m03 sudo cat                                                                                   | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:46 UTC | 01 Apr 24 11:46 UTC |
	|         | /home/docker/cp-test_ha-401500-m02_ha-401500-m03.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m02:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:46 UTC | 01 Apr 24 11:46 UTC |
	|         | ha-401500-m04:/home/docker/cp-test_ha-401500-m02_ha-401500-m04.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:46 UTC | 01 Apr 24 11:46 UTC |
	|         | ha-401500-m02 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n ha-401500-m04 sudo cat                                                                                   | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:46 UTC | 01 Apr 24 11:46 UTC |
	|         | /home/docker/cp-test_ha-401500-m02_ha-401500-m04.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-401500 cp testdata\cp-test.txt                                                                                         | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:46 UTC | 01 Apr 24 11:47 UTC |
	|         | ha-401500-m03:/home/docker/cp-test.txt                                                                                    |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:47 UTC | 01 Apr 24 11:47 UTC |
	|         | ha-401500-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:47 UTC | 01 Apr 24 11:47 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m03.txt |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:47 UTC | 01 Apr 24 11:47 UTC |
	|         | ha-401500-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:47 UTC | 01 Apr 24 11:47 UTC |
	|         | ha-401500:/home/docker/cp-test_ha-401500-m03_ha-401500.txt                                                                |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:47 UTC | 01 Apr 24 11:47 UTC |
	|         | ha-401500-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n ha-401500 sudo cat                                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:47 UTC | 01 Apr 24 11:48 UTC |
	|         | /home/docker/cp-test_ha-401500-m03_ha-401500.txt                                                                          |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:48 UTC | 01 Apr 24 11:48 UTC |
	|         | ha-401500-m02:/home/docker/cp-test_ha-401500-m03_ha-401500-m02.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:48 UTC | 01 Apr 24 11:48 UTC |
	|         | ha-401500-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n ha-401500-m02 sudo cat                                                                                   | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:48 UTC | 01 Apr 24 11:48 UTC |
	|         | /home/docker/cp-test_ha-401500-m03_ha-401500-m02.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m03:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:48 UTC | 01 Apr 24 11:49 UTC |
	|         | ha-401500-m04:/home/docker/cp-test_ha-401500-m03_ha-401500-m04.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:49 UTC | 01 Apr 24 11:49 UTC |
	|         | ha-401500-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n ha-401500-m04 sudo cat                                                                                   | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:49 UTC | 01 Apr 24 11:49 UTC |
	|         | /home/docker/cp-test_ha-401500-m03_ha-401500-m04.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-401500 cp testdata\cp-test.txt                                                                                         | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:49 UTC | 01 Apr 24 11:49 UTC |
	|         | ha-401500-m04:/home/docker/cp-test.txt                                                                                    |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:49 UTC | 01 Apr 24 11:49 UTC |
	|         | ha-401500-m04 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:49 UTC | 01 Apr 24 11:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2057462073\001\cp-test_ha-401500-m04.txt |           |                   |                |                     |                     |
	| ssh     | ha-401500 ssh -n                                                                                                          | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:49 UTC | 01 Apr 24 11:50 UTC |
	|         | ha-401500-m04 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| cp      | ha-401500 cp ha-401500-m04:/home/docker/cp-test.txt                                                                       | ha-401500 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 11:50 UTC |                     |
	|         | ha-401500:/home/docker/cp-test_ha-401500-m04_ha-401500.txt                                                                |           |                   |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 11:20:09
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 11:20:09.958181   12872 out.go:291] Setting OutFile to fd 1008 ...
	I0401 11:20:09.958812   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:09.958812   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:20:09.958812   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:09.986028   12872 out.go:298] Setting JSON to false
	I0401 11:20:09.991015   12872 start.go:129] hostinfo: {"hostname":"minikube6","uptime":313168,"bootTime":1711657241,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 11:20:09.991015   12872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 11:20:09.995348   12872 out.go:177] * [ha-401500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 11:20:09.998749   12872 notify.go:220] Checking for updates...
	I0401 11:20:09.999725   12872 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:20:10.001767   12872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:20:10.003754   12872 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 11:20:10.006745   12872 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:20:10.008770   12872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:20:10.011758   12872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:20:15.668057   12872 out.go:177] * Using the hyperv driver based on user configuration
	I0401 11:20:15.671592   12872 start.go:297] selected driver: hyperv
	I0401 11:20:15.671592   12872 start.go:901] validating driver "hyperv" against <nil>
	I0401 11:20:15.671592   12872 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:20:15.724851   12872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 11:20:15.726083   12872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:20:15.726259   12872 cni.go:84] Creating CNI manager for ""
	I0401 11:20:15.726259   12872 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0401 11:20:15.726259   12872 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 11:20:15.726259   12872 start.go:340] cluster config:
	{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:20:15.726259   12872 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 11:20:15.730654   12872 out.go:177] * Starting "ha-401500" primary control-plane node in "ha-401500" cluster
	I0401 11:20:15.732981   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:20:15.732981   12872 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 11:20:15.732981   12872 cache.go:56] Caching tarball of preloaded images
	I0401 11:20:15.733502   12872 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:20:15.733669   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:20:15.734199   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:20:15.734428   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json: {Name:mkee2f372bb024ea4eb6a289a94c70141fb4b78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:15.735787   12872 start.go:360] acquireMachinesLock for ha-401500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:20:15.735787   12872 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-401500"
	I0401 11:20:15.735787   12872 start.go:93] Provisioning new machine with config: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:20:15.735787   12872 start.go:125] createHost starting for "" (driver="hyperv")
	I0401 11:20:15.740904   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 11:20:15.741569   12872 start.go:159] libmachine.API.Create for "ha-401500" (driver="hyperv")
	I0401 11:20:15.741569   12872 client.go:168] LocalClient.Create starting
	I0401 11:20:15.741761   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 11:20:15.741761   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:20:15.742313   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:20:15.742444   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 11:20:15.742774   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:20:15.742820   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:20:15.742929   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 11:20:17.947596   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 11:20:17.947596   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:17.948558   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 11:20:19.783159   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 11:20:19.783159   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:19.783234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:20:21.366910   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:20:21.367902   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:21.368117   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:20:25.159543   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:20:25.159603   12872 main.go:141] libmachine: [stderr =====>] : 
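
Every Hyper-V operation in this run is performed the same way: shell out to powershell.exe with -NoProfile -NonInteractive, capture stdout/stderr, and (as with Get-VMSwitch above) decode the JSON that ConvertTo-Json prints. A minimal Go sketch of that pattern; the vmSwitch type and the main() wrapper are illustrative, not minikube's actual code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// vmSwitch mirrors the fields selected by the PowerShell pipeline above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		// @() forces ConvertTo-Json to emit an array even for a single switch,
		// which is why the log shows "[" around the lone "Default Switch".
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive",
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`,
		).Output()
		if err != nil {
			panic(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}
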
	I0401 11:20:25.161994   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 11:20:25.689799   12872 main.go:141] libmachine: Creating SSH key...
	I0401 11:20:25.889733   12872 main.go:141] libmachine: Creating VM...
	I0401 11:20:25.890753   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:20:28.865647   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:20:28.866120   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:28.866120   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0401 11:20:28.866120   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:20:30.786591   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:20:30.786715   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:30.786715   12872 main.go:141] libmachine: Creating VHD
	I0401 11:20:30.786804   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 11:20:34.649179   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6418C629-8011-4DB6-A5A9-1C2F45A7C7FA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 11:20:34.649946   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:34.649946   12872 main.go:141] libmachine: Writing magic tar header
	I0401 11:20:34.650071   12872 main.go:141] libmachine: Writing SSH key tar header
	I0401 11:20:34.664142   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 11:20:38.034051   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:38.034051   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:38.034051   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\disk.vhd' -SizeBytes 20000MB
	I0401 11:20:40.771482   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:40.771665   12872 main.go:141] libmachine: [stderr =====>] : 
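
The sequence above looks odd on its face: create a 10MB fixed VHD, write a "magic tar header" and the SSH key into it, then convert it to a dynamic VHD and resize it to the full 20000MB. The point is to smuggle the freshly generated SSH key onto the disk before first boot: boot2docker formats and adopts a disk whose first bytes are a tar archive carrying a magic marker file, extracting the key's files as it does so. A rough Go sketch of writing such an archive into the head of the raw disk file; the entry names follow boot2docker's convention as best I can tell and should be treated as assumptions:

	package disk

	import (
		"archive/tar"
		"os"
	)

	// writeMagicTar writes a small tar archive at offset 0 of the raw VHD
	// data so the guest can pick up the SSH key on first boot.
	func writeMagicTar(diskPath string, pubKey []byte) error {
		f, err := os.OpenFile(diskPath, os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()

		tw := tar.NewWriter(f)
		// Marker file: boot2docker auto-formats a disk that starts with this
		// (the exact name is an assumption here).
		if err := tw.WriteHeader(&tar.Header{Name: "boot2docker, please format-me", Mode: 0o644}); err != nil {
			return err
		}
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0o700}); err != nil {
			return err
		}
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(pubKey))}); err != nil {
			return err
		}
		if _, err := tw.Write(pubKey); err != nil {
			return err
		}
		return tw.Close()
	}

The fixed-then-convert dance is presumably because a fixed VHD's data area starts at offset 0, so the tar lands where the guest expects it; converting to a dynamic VHD afterwards keeps the host-side file small.
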
	I0401 11:20:40.771708   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-401500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0401 11:20:44.710669   12872 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-401500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 11:20:44.710669   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:44.710669   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-401500 -DynamicMemoryEnabled $false
	I0401 11:20:47.152774   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:47.153330   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:47.153330   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-401500 -Count 2
	I0401 11:20:49.427854   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:49.428044   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:49.428155   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-401500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\boot2docker.iso'
	I0401 11:20:52.196449   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:52.196613   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:52.196719   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-401500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\disk.vhd'
	I0401 11:20:54.959360   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:54.963381   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:54.963381   12872 main.go:141] libmachine: Starting VM...
	I0401 11:20:54.963445   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-401500
	I0401 11:20:58.184682   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:20:58.184682   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:20:58.184682   12872 main.go:141] libmachine: Waiting for host to start...
	I0401 11:20:58.184817   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:00.533680   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:00.533718   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:00.533846   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:03.217674   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:03.217674   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:04.228448   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:06.565338   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:06.565400   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:06.565400   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:09.240020   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:09.240020   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:10.244815   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:12.584660   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:12.585756   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:12.586036   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:15.210292   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:15.210292   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:16.211797   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:18.565314   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:18.565314   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:18.565420   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:21.222497   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:21:21.222497   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:22.233375   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:24.585674   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:24.585674   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:24.586316   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:27.654188   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:27.654188   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:27.654188   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:29.915773   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:29.915773   12872 main.go:141] libmachine: [stderr =====>] : 
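
"Waiting for host to start" is a plain polling loop: query the VM state, then the first NIC's first address, and go around again while the address is still empty (the empty [stdout] lines above are polls that returned no IP yet; the timestamps suggest roughly a one-second pause between rounds). A sketch of the loop, with runPS standing in for the powershell.exe invocation shown earlier; the retry budget and interval are illustrative choices:

	package hyperv

	import (
		"fmt"
		"strings"
		"time"
	)

	// waitForIP polls the VM until it is Running and reports an address.
	// runPS is assumed to run a PowerShell expression and return its stdout.
	func waitForIP(vmName string, runPS func(string) (string, error)) (string, error) {
		for attempt := 0; attempt < 60; attempt++ {
			state, err := runPS(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vmName))
			if err != nil {
				return "", err
			}
			if strings.TrimSpace(state) == "Running" {
				ip, err := runPS(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName))
				if err != nil {
					return "", err
				}
				if ip = strings.TrimSpace(ip); ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %q to report an IP", vmName)
	}
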
	I0401 11:21:29.916055   12872 machine.go:94] provisionDockerMachine start ...
	I0401 11:21:29.916210   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:32.192000   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:32.192000   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:32.192000   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:34.971209   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:34.971537   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:34.977441   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:21:34.988195   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:21:34.988195   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:21:35.128782   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 11:21:35.128886   12872 buildroot.go:166] provisioning hostname "ha-401500"
	I0401 11:21:35.128990   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:37.435002   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:37.435830   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:37.435912   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:40.140343   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:40.141396   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:40.147044   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:21:40.147600   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:21:40.147744   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401500 && echo "ha-401500" | sudo tee /etc/hostname
	I0401 11:21:40.304874   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401500
	
	I0401 11:21:40.304874   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:42.551086   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:42.551086   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:42.551086   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:45.245322   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:45.245630   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:45.251871   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:21:45.252528   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:21:45.252528   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:21:45.411665   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:21:45.411665   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:21:45.411665   12872 buildroot.go:174] setting up certificates
	I0401 11:21:45.411665   12872 provision.go:84] configureAuth start
	I0401 11:21:45.411665   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:47.697645   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:47.697843   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:47.697941   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:50.426307   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:50.426866   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:50.426866   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:52.699545   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:52.699545   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:52.699625   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:21:55.463663   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:21:55.463663   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:55.464243   12872 provision.go:143] copyHostCerts
	I0401 11:21:55.464391   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 11:21:55.464416   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:21:55.464416   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:21:55.464976   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:21:55.466381   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 11:21:55.466616   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:21:55.466684   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:21:55.466929   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:21:55.467973   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 11:21:55.468222   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:21:55.468271   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:21:55.469302   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:21:55.470588   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-401500 san=[127.0.0.1 172.19.153.73 ha-401500 localhost minikube]
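
configureAuth issues a server certificate signed by the local minikube CA, with the SAN list shown above (loopback, the VM's IP, the cluster name, localhost, minikube). The shape of that operation in Go's standard library; this is a condensed sketch, not minikube's actual implementation, and the 26280h lifetime is simply the CertExpiration value from the cluster config:

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a TLS server cert for the given SANs, signed by
	// caCert/caKey. IP-shaped SANs go in IPAddresses, the rest in DNSNames.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-401500"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return der, key, err
	}
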
	I0401 11:21:55.991291   12872 provision.go:177] copyRemoteCerts
	I0401 11:21:56.006086   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:21:56.006086   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:21:58.299042   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:21:58.299894   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:21:58.299981   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:01.018248   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:01.018687   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:01.018751   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:01.133762   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1275788s)
	I0401 11:22:01.133833   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 11:22:01.133833   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:22:01.181330   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 11:22:01.181330   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0401 11:22:01.241071   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 11:22:01.241550   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 11:22:01.303683   12872 provision.go:87] duration metric: took 15.8918637s to configureAuth
	I0401 11:22:01.303724   12872 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:22:01.304104   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:22:01.304104   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:03.575632   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:03.575632   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:03.575737   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:06.315013   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:06.315770   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:06.322465   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:06.322611   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:06.322611   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:22:06.453162   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:22:06.453162   12872 buildroot.go:70] root file system type: tmpfs
	I0401 11:22:06.453162   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:22:06.453711   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:08.715812   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:08.715812   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:08.716801   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:11.409628   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:11.409785   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:11.415242   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:11.415945   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:11.415945   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:22:11.579718   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 11:22:11.579831   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:13.842278   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:13.842367   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:13.842367   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:16.481195   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:16.481589   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:16.487110   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:16.487732   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:16.487732   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:22:18.653185   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0401 11:22:18.653185   12872 machine.go:97] duration metric: took 48.7367321s to provisionDockerMachine
	I0401 11:22:18.653185   12872 client.go:171] duration metric: took 2m2.9107434s to LocalClient.Create
	I0401 11:22:18.653185   12872 start.go:167] duration metric: took 2m2.9107434s to libmachine.API.Create "ha-401500"
	I0401 11:22:18.653185   12872 start.go:293] postStartSetup for "ha-401500" (driver="hyperv")
	I0401 11:22:18.653185   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:22:18.667493   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:22:18.667493   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:20.911183   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:20.911183   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:20.911416   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:23.591542   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:23.591542   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:23.592668   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:23.699451   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0319221s)
	I0401 11:22:23.714122   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:22:23.722998   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:22:23.722998   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:22:23.722998   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:22:23.724676   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:22:23.724727   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 11:22:23.738491   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:22:23.759003   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:22:23.814524   12872 start.go:296] duration metric: took 5.1608412s for postStartSetup
	I0401 11:22:23.817147   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:26.064866   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:26.065075   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:26.065075   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:28.755698   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:28.755698   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:28.756594   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:22:28.759267   12872 start.go:128] duration metric: took 2m13.022536s to createHost
	I0401 11:22:28.759802   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:31.018851   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:31.018851   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:31.018851   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:33.745186   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:33.745186   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:33.752656   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:33.752916   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:33.752916   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 11:22:33.891104   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711970553.885159057
	
	I0401 11:22:33.891104   12872 fix.go:216] guest clock: 1711970553.885159057
	I0401 11:22:33.891210   12872 fix.go:229] Guest: 2024-04-01 11:22:33.885159057 +0000 UTC Remote: 2024-04-01 11:22:28.7592675 +0000 UTC m=+138.992378801 (delta=5.125891557s)
	I0401 11:22:33.891282   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:36.213569   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:36.213569   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:36.213831   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:38.921844   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:38.922359   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:38.929284   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:22:38.929723   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.153.73 22 <nil> <nil>}
	I0401 11:22:38.930296   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711970553
	I0401 11:22:39.083238   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:22:33 UTC 2024
	
	I0401 11:22:39.084235   12872 fix.go:236] clock set: Mon Apr  1 11:22:33 UTC 2024
	 (err=<nil>)
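
The clock fix above reads the guest clock with "date +%s.%N", compares it against the host ("Remote") clock to get the 5.1s delta, then resets the guest with "sudo date -s @<unix-seconds>". A sketch of that check; runSSH is assumed to run a command on the VM and return its stdout, and both the drift threshold and which clock wins are illustrative guesses rather than minikube's documented policy:

	package clock

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// syncGuestClock resets the guest clock if it drifts too far from ours.
	func syncGuestClock(runSSH func(string) (string, error), maxDrift time.Duration) error {
		out, err := runSSH("date +%s.%N")
		if err != nil {
			return err
		}
		guestSec, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return err
		}
		guest := time.Unix(0, int64(guestSec*float64(time.Second)))
		if drift := guest.Sub(time.Now()); drift > maxDrift || drift < -maxDrift {
			// Same command shape as the log; using host time here is an
			// assumption, the log does not show where the timestamp comes from.
			_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
		}
		return err
	}
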
	I0401 11:22:39.084235   12872 start.go:83] releasing machines lock for "ha-401500", held for 2m23.3474302s
	I0401 11:22:39.084235   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:41.288362   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:41.288504   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:41.288504   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:43.986086   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:43.986086   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:43.990874   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:22:43.990952   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:44.000596   12872 ssh_runner.go:195] Run: cat /version.json
	I0401 11:22:44.000596   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:22:46.324096   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:46.324096   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:46.324096   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:22:46.324238   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:46.324238   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:46.324238   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:22:49.097385   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:49.097385   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:49.098697   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:49.124947   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:22:49.125085   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:22:49.125761   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:22:49.203929   12872 ssh_runner.go:235] Completed: cat /version.json: (5.2031665s)
	I0401 11:22:49.216735   12872 ssh_runner.go:195] Run: systemctl --version
	I0401 11:22:49.274646   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2837343s)
	I0401 11:22:49.287726   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 11:22:49.296404   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:22:49.307339   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:22:49.335423   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 11:22:49.335423   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:22:49.335833   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:22:49.387348   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:22:49.417911   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:22:49.439303   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:22:49.451130   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:22:49.488652   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:22:49.520112   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:22:49.553622   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:22:49.586659   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:22:49.620768   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:22:49.654826   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:22:49.686665   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:22:49.719959   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:22:49.756599   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:22:49.787956   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:50.020647   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:22:50.058435   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:22:50.071339   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:22:50.112278   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:22:50.154969   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:22:50.197536   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:22:50.232430   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:22:50.272028   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:22:50.339524   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:22:50.365778   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:22:50.415749   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:22:50.435470   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:22:50.454740   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:22:50.503850   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:22:50.709536   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:22:50.901353   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:22:50.901613   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:22:50.950825   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:51.154131   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:22:53.722203   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5680541s)
	I0401 11:22:53.734660   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 11:22:53.771790   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:22:53.809943   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 11:22:54.037543   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 11:22:54.263629   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:54.484522   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 11:22:54.533419   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:22:54.576449   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:22:54.808859   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 11:22:54.916240   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 11:22:54.928400   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 11:22:54.937390   12872 start.go:562] Will wait 60s for crictl version
	I0401 11:22:54.947387   12872 ssh_runner.go:195] Run: which crictl
	I0401 11:22:54.966760   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:22:55.046335   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 11:22:55.060081   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:22:55.106961   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:22:55.145223   12872 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 11:22:55.145460   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 11:22:55.151570   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 11:22:55.154233   12872 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 11:22:55.154233   12872 ip.go:210] interface addr: 172.19.144.1/20
	I0401 11:22:55.167241   12872 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 11:22:55.175484   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:22:55.211626   12872 kubeadm.go:877] updating cluster {Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 11:22:55.211626   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:22:55.220685   12872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 11:22:55.244065   12872 docker.go:685] Got preloaded images: 
	I0401 11:22:55.244065   12872 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0401 11:22:55.257240   12872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 11:22:55.293428   12872 ssh_runner.go:195] Run: which lz4
	I0401 11:22:55.300447   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0401 11:22:55.312718   12872 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 11:22:55.317554   12872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 11:22:55.317554   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0401 11:22:57.476787   12872 docker.go:649] duration metric: took 2.1762419s to copy over tarball
	I0401 11:22:57.491136   12872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 11:23:06.343668   12872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8523846s)
	I0401 11:23:06.343668   12872 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 11:23:06.415209   12872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 11:23:06.435162   12872 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0401 11:23:06.481453   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:23:06.719963   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:23:09.939933   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2199472s)
	I0401 11:23:09.950773   12872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 11:23:09.976251   12872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
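
Because kube-apiserver:v1.29.3 was not among the images on the node, the preload tarball was copied over SSH and unpacked into /var/lib/docker before Docker was restarted; the second "docker images" call then reports the full preloaded set above. A sketch of that unpack step, assuming /preloaded.tar.lz4 is already on the node:

    # restore preloaded image layers into Docker's storage, preserving xattrs
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    # restart Docker so it picks up the restored layers, then verify
    sudo systemctl daemon-reload && sudo systemctl restart docker
    docker images --format '{{.Repository}}:{{.Tag}}'
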
	I0401 11:23:09.976251   12872 cache_images.go:84] Images are preloaded, skipping loading
	I0401 11:23:09.976251   12872 kubeadm.go:928] updating node { 172.19.153.73 8443 v1.29.3 docker true true} ...
	I0401 11:23:09.976251   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-401500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.153.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 11:23:09.986760   12872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0401 11:23:10.023706   12872 cni.go:84] Creating CNI manager for ""
	I0401 11:23:10.023748   12872 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 11:23:10.023807   12872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 11:23:10.023807   12872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.153.73 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-401500 NodeName:ha-401500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.153.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.153.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 11:23:10.023807   12872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.153.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-401500"
	  kubeletExtraArgs:
	    node-ip: 172.19.153.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.153.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
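
The generated config above is a four-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml. It can be sanity-checked offline before init runs; a sketch, assuming a kubeadm v1.29 binary (the "config validate" subcommand exists in recent releases):

    # validate the multi-document config without touching the cluster
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # or walk the full init pipeline with no side effects
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
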
	
	I0401 11:23:10.023807   12872 kube-vip.go:111] generating kube-vip config ...
	I0401 11:23:10.038487   12872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 11:23:10.071576   12872 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 11:23:10.071762   12872 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
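
kube-vip runs as a static pod, announces the HA VIP 172.19.159.254 over ARP on eth0, and load-balances port 8443 across control-plane members, with leadership coordinated through the plndr-cp-lock lease. Once the pod is up, the VIP can be probed directly; a sketch, assuming network reachability and a configured kubectl:

    # the VIP should answer on the API server port after leader election settles
    curl -k https://172.19.159.254:8443/healthz
    # see which control-plane node currently holds the kube-vip lease
    kubectl -n kube-system get lease plndr-cp-lock
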
	I0401 11:23:10.086461   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 11:23:10.105265   12872 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 11:23:10.122378   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0401 11:23:10.145492   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0401 11:23:10.180363   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:23:10.217361   12872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0401 11:23:10.255972   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0401 11:23:10.302184   12872 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0401 11:23:10.309618   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:23:10.345391   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:23:10.575861   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:23:10.608738   12872 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500 for IP: 172.19.153.73
	I0401 11:23:10.608738   12872 certs.go:194] generating shared ca certs ...
	I0401 11:23:10.608931   12872 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:10.609677   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 11:23:10.610122   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 11:23:10.610183   12872 certs.go:256] generating profile certs ...
	I0401 11:23:10.611054   12872 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key
	I0401 11:23:10.611264   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.crt with IP's: []
	I0401 11:23:10.999213   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.crt ...
	I0401 11:23:10.999213   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.crt: {Name:mk509712757761f333b5c32ef54f4a38ffc199ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.001205   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key ...
	I0401 11:23:11.001205   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key: {Name:mkd4e7cd761140dd8d5f554482c5b9785b00f60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.002212   12872 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea
	I0401 11:23:11.002212   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.153.73 172.19.159.254]
	I0401 11:23:11.325644   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea ...
	I0401 11:23:11.325644   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea: {Name:mk12b87cb53027b4d13055127261e3a8281b77e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.327101   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea ...
	I0401 11:23:11.327101   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea: {Name:mk4a8764380b69ad826c3ae1d1a5760b71241788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.328352   12872 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.ebe7f2ea -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt
	I0401 11:23:11.340431   12872 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.ebe7f2ea -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key
	I0401 11:23:11.341356   12872 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key
	I0401 11:23:11.341356   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt with IP's: []
	I0401 11:23:11.534308   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt ...
	I0401 11:23:11.534308   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt: {Name:mkd5c65cb2feb76384684744ec21e6f206c25eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:11.535368   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key ...
	I0401 11:23:11.535368   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key: {Name:mk1d87f6b19e07b54fc72f7df7c27133de3a504e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
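
At this point the shared CAs were reused and three profile cert pairs were generated locally: a client cert, an API server serving cert (whose SANs cover the service IP, localhost, the node IP, and the HA VIP), and an aggregator proxy-client cert. Their contents can be inspected with openssl; a sketch, run from the profile directory shown in the paths above:

    # show subject, validity window, and SANs of the API server serving cert
    openssl x509 -in apiserver.crt -noout -subject -dates
    openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
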
	I0401 11:23:11.536336   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 11:23:11.537348   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 11:23:11.548957   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 11:23:11.549283   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 11:23:11.549862   12872 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 11:23:11.549999   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 11:23:11.550292   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 11:23:11.550600   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 11:23:11.550848   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 11:23:11.551406   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 11:23:11.551630   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:11.551845   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem -> /usr/share/ca-certificates/1260.pem
	I0401 11:23:11.552011   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /usr/share/ca-certificates/12602.pem
	I0401 11:23:11.553211   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:23:11.601569   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:23:11.658572   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:23:11.711279   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 11:23:11.763414   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 11:23:11.819548   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 11:23:11.869125   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:23:11.917510   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 11:23:11.970908   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:23:12.018072   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 11:23:12.069596   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 11:23:12.117966   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 11:23:12.166035   12872 ssh_runner.go:195] Run: openssl version
	I0401 11:23:12.188523   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 11:23:12.222638   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 11:23:12.231406   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 11:23:12.244594   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 11:23:12.266371   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:23:12.297800   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:23:12.330603   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:12.339449   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:12.351872   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:23:12.374949   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 11:23:12.407913   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 11:23:12.439205   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 11:23:12.446475   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 11:23:12.460911   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 11:23:12.487116   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
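
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup convention: "openssl x509 -hash" prints the subject-name hash, and /etc/ssl/certs/<hash>.0 must be a symlink to the PEM (here b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs). The same step for one certificate:

    # compute the subject hash and create the symlink OpenSSL's verifier expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
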
	I0401 11:23:12.526335   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:23:12.535271   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 11:23:12.535271   12872 kubeadm.go:391] StartCluster: {Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:23:12.546243   12872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0401 11:23:12.590192   12872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 11:23:12.624196   12872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 11:23:12.655425   12872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 11:23:12.674488   12872 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 11:23:12.674488   12872 kubeadm.go:156] found existing configuration files:
	
	I0401 11:23:12.687469   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 11:23:12.703453   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 11:23:12.715853   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 11:23:12.747799   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 11:23:12.769066   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 11:23:12.780868   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 11:23:12.812081   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 11:23:12.829667   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 11:23:12.841860   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 11:23:12.871800   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 11:23:12.889761   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 11:23:12.904925   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
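
Each of the four grep/rm pairs above applies the same stale-config check: if a kubeconfig does not reference https://control-plane.minikube.internal:8443 (here, because none exists yet on first start), it is removed so kubeadm regenerates it. A compact equivalent of the whole loop:

    # drop any kubeconfig that does not point at the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
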
	I0401 11:23:12.923333   12872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 11:23:13.449915   12872 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 11:23:28.609333   12872 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 11:23:28.609771   12872 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 11:23:28.609984   12872 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 11:23:28.610181   12872 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 11:23:28.610540   12872 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 11:23:28.610671   12872 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 11:23:28.613697   12872 out.go:204]   - Generating certificates and keys ...
	I0401 11:23:28.613697   12872 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 11:23:28.613697   12872 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 11:23:28.614375   12872 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 11:23:28.614534   12872 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 11:23:28.614696   12872 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 11:23:28.614841   12872 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 11:23:28.615021   12872 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 11:23:28.615399   12872 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-401500 localhost] and IPs [172.19.153.73 127.0.0.1 ::1]
	I0401 11:23:28.615553   12872 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 11:23:28.615855   12872 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-401500 localhost] and IPs [172.19.153.73 127.0.0.1 ::1]
	I0401 11:23:28.616037   12872 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 11:23:28.616216   12872 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 11:23:28.616394   12872 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 11:23:28.616511   12872 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 11:23:28.616701   12872 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 11:23:28.616872   12872 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 11:23:28.617062   12872 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 11:23:28.617263   12872 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 11:23:28.617307   12872 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 11:23:28.617307   12872 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 11:23:28.617307   12872 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 11:23:28.620723   12872 out.go:204]   - Booting up control plane ...
	I0401 11:23:28.620723   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 11:23:28.620723   12872 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 11:23:28.621743   12872 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 11:23:28.621743   12872 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 11:23:28.621743   12872 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.603521 seconds
	I0401 11:23:28.622348   12872 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 11:23:28.622348   12872 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 11:23:28.622348   12872 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 11:23:28.623361   12872 kubeadm.go:309] [mark-control-plane] Marking the node ha-401500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 11:23:28.623361   12872 kubeadm.go:309] [bootstrap-token] Using token: jgil8o.iynv4v6pgp2ssyrk
	I0401 11:23:28.628393   12872 out.go:204]   - Configuring RBAC rules ...
	I0401 11:23:28.628393   12872 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 11:23:28.628393   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 11:23:28.628393   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 11:23:28.629362   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 11:23:28.629362   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 11:23:28.629362   12872 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 11:23:28.629362   12872 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 11:23:28.629362   12872 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 11:23:28.629362   12872 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 11:23:28.629362   12872 kubeadm.go:309] 
	I0401 11:23:28.630386   12872 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 11:23:28.630386   12872 kubeadm.go:309] 
	I0401 11:23:28.630386   12872 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 11:23:28.630386   12872 kubeadm.go:309] 
	I0401 11:23:28.630386   12872 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 11:23:28.630386   12872 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 11:23:28.630998   12872 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 11:23:28.631085   12872 kubeadm.go:309] 
	I0401 11:23:28.631270   12872 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 11:23:28.631327   12872 kubeadm.go:309] 
	I0401 11:23:28.631471   12872 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 11:23:28.631471   12872 kubeadm.go:309] 
	I0401 11:23:28.631586   12872 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 11:23:28.631755   12872 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 11:23:28.631917   12872 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 11:23:28.631917   12872 kubeadm.go:309] 
	I0401 11:23:28.632096   12872 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 11:23:28.632284   12872 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 11:23:28.632284   12872 kubeadm.go:309] 
	I0401 11:23:28.632474   12872 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jgil8o.iynv4v6pgp2ssyrk \
	I0401 11:23:28.632683   12872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c \
	I0401 11:23:28.632683   12872 kubeadm.go:309] 	--control-plane 
	I0401 11:23:28.632683   12872 kubeadm.go:309] 
	I0401 11:23:28.632893   12872 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 11:23:28.632893   12872 kubeadm.go:309] 
	I0401 11:23:28.633112   12872 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jgil8o.iynv4v6pgp2ssyrk \
	I0401 11:23:28.633309   12872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c 
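
The join commands printed above are what the later ha-401500-m02/m03 nodes effectively run. A sketch of the control-plane variant, assuming the CA material has been copied to the new node out of band as the message describes (token and hash verbatim from the output):

    # on a fresh control-plane node, after staging certs under /etc/kubernetes/pki
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token jgil8o.iynv4v6pgp2ssyrk \
      --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c \
      --control-plane
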
	I0401 11:23:28.633309   12872 cni.go:84] Creating CNI manager for ""
	I0401 11:23:28.633309   12872 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 11:23:28.634814   12872 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 11:23:28.654018   12872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 11:23:28.665700   12872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0401 11:23:28.665700   12872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0401 11:23:28.738304   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 11:23:29.551880   12872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 11:23:29.566744   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-401500 minikube.k8s.io/updated_at=2024_04_01T11_23_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=ha-401500 minikube.k8s.io/primary=true
	I0401 11:23:29.566744   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:29.618262   12872 ops.go:34] apiserver oom_adj: -16
	I0401 11:23:29.827749   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:30.340068   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:30.827634   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:31.335244   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:31.838270   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:32.340731   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:32.828250   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:33.329378   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:33.830279   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:34.328423   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:34.841549   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:35.342967   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:35.835091   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:36.336315   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:36.839066   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:37.328203   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:37.837944   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:38.339394   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:38.842372   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:39.328943   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:39.831299   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:40.339048   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 11:23:40.553047   12872 kubeadm.go:1107] duration metric: took 11.0009865s to wait for elevateKubeSystemPrivileges
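
The repeated "kubectl get sa default" calls above poll roughly twice a second until the default service account exists, which is the signal that kube-controller-manager is up and the cluster-admin RBAC grant can take effect. An equivalent wait loop:

    # block until the controller-manager has created the default service account
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
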
	W0401 11:23:40.553047   12872 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 11:23:40.553047   12872 kubeadm.go:393] duration metric: took 28.0175775s to StartCluster
	I0401 11:23:40.553047   12872 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:40.553047   12872 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:23:40.555044   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:23:40.556046   12872 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:23:40.556046   12872 start.go:240] waiting for startup goroutines ...
	I0401 11:23:40.556046   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 11:23:40.556046   12872 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 11:23:40.556046   12872 addons.go:69] Setting storage-provisioner=true in profile "ha-401500"
	I0401 11:23:40.556046   12872 addons.go:69] Setting default-storageclass=true in profile "ha-401500"
	I0401 11:23:40.556046   12872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-401500"
	I0401 11:23:40.556046   12872 addons.go:234] Setting addon storage-provisioner=true in "ha-401500"
	I0401 11:23:40.557039   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:23:40.557039   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:23:40.557039   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:40.558047   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:40.805041   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 11:23:41.333371   12872 start.go:946] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
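
The pipeline above edits the live coredns ConfigMap in place: sed inserts a hosts plugin block ahead of the forward plugin so that host.minikube.internal resolves to the Windows host, and the result is fed back through "kubectl replace". The same pattern with a plain kubectl (a sketch; assumes kubectl is already pointed at the cluster):

    # inject a host record into the CoreDNS Corefile and replace the ConfigMap
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -
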
	I0401 11:23:42.967628   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:42.967800   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:42.967800   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:42.967800   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:42.970171   12872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 11:23:42.968492   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:23:42.973232   12872 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:23:42.973267   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 11:23:42.973362   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:42.974278   12872 kapi.go:59] client config for ha-401500: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 11:23:42.975889   12872 cert_rotation.go:137] Starting client certificate rotation controller
	I0401 11:23:42.975889   12872 addons.go:234] Setting addon default-storageclass=true in "ha-401500"
	I0401 11:23:42.976418   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:23:42.977068   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:45.393478   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:45.393771   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:45.393844   12872 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 11:23:45.393844   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 11:23:45.393844   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:23:45.402206   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:45.402206   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:45.402206   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:23:47.775275   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:23:47.776051   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:47.776051   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:23:48.287708   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:23:48.287708   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:48.288543   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:23:48.448315   12872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:23:50.571629   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:23:50.572307   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:50.572854   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:23:50.728200   12872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 11:23:51.095294   12872 round_trippers.go:463] GET https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0401 11:23:51.095294   12872 round_trippers.go:469] Request Headers:
	I0401 11:23:51.095294   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:23:51.095294   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:23:51.109752   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:23:51.111289   12872 round_trippers.go:463] PUT https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0401 11:23:51.111289   12872 round_trippers.go:469] Request Headers:
	I0401 11:23:51.111289   12872 round_trippers.go:473]     Content-Type: application/json
	I0401 11:23:51.111289   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:23:51.111289   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:23:51.114808   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
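
The two REST round-trips above list the storage classes and then PUT the "standard" class back, marking it as the default. The same read can be reproduced against the HA VIP with curl; a sketch, assuming the profile's client cert and key files are at hand (the client.crt/client.key paths are placeholders):

    # list storage classes through the VIP, authenticating with the client cert
    curl -sk https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses \
      --cert client.crt --key client.key
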
	I0401 11:23:51.122092   12872 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 11:23:51.128488   12872 addons.go:505] duration metric: took 10.5723671s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 11:23:51.128488   12872 start.go:245] waiting for cluster config update ...
	I0401 11:23:51.128488   12872 start.go:254] writing updated cluster config ...
	I0401 11:23:51.131226   12872 out.go:177] 
	I0401 11:23:51.143078   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:23:51.143610   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:23:51.151527   12872 out.go:177] * Starting "ha-401500-m02" control-plane node in "ha-401500" cluster
	I0401 11:23:51.157510   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:23:51.158467   12872 cache.go:56] Caching tarball of preloaded images
	I0401 11:23:51.158467   12872 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:23:51.158467   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:23:51.158467   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:23:51.161485   12872 start.go:360] acquireMachinesLock for ha-401500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:23:51.161485   12872 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-401500-m02"
	I0401 11:23:51.162475   12872 start.go:93] Provisioning new machine with config: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
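The dump above is the cluster config that gets persisted to profiles\ha-401500\config.json. A minimal Go sketch of reading it back follows; the struct is a hand-trimmed subset assumed from the fields visible in the dump (encoding/json simply ignores everything else), not minikube's own types.

// Sketch: load the persisted profile config and list its nodes.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Assumed subset of the config fields shown in the log dump above.
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name   string
	Driver string
	Memory int
	CPUs   int
	Nodes  []Node
}

func main() {
	raw, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json`)
	if err != nil {
		panic(err)
	}
	var cc ClusterConfig
	if err := json.Unmarshal(raw, &cc); err != nil {
		panic(err)
	}
	for _, n := range cc.Nodes {
		fmt.Printf("node %q control-plane=%v ip=%q\n", n.Name, n.ControlPlane, n.IP)
	}
}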
	I0401 11:23:51.162475   12872 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0401 11:23:51.166526   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 11:23:51.166526   12872 start.go:159] libmachine.API.Create for "ha-401500" (driver="hyperv")
	I0401 11:23:51.166526   12872 client.go:168] LocalClient.Create starting
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:23:51.167479   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 11:23:53.230681   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 11:23:53.230863   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:53.230863   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 11:23:55.067655   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 11:23:55.067655   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:55.067655   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:23:56.632719   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:23:56.632719   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:23:56.632719   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:24:00.448816   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:24:00.448816   12872 main.go:141] libmachine: [stderr =====>] : 
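The switch discovery above is a plain shell-out: run powershell.exe with -NoProfile -NonInteractive, read stdout, and decode the ConvertTo-Json payload. A hedged Go sketch of that pattern (the types are assumptions matching the JSON printed in the log, not minikube's own):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // 1 == Internal (the "Default Switch" above), 2 == External
}

func main() {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", script)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		panic(fmt.Errorf("%v: %s", err, stderr.String()))
	}
	var switches []vmSwitch
	if err := json.Unmarshal(stdout.Bytes(), &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("switch %q (type %d)\n", s.Name, s.SwitchType)
	}
}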
	I0401 11:24:00.451201   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 11:24:00.984253   12872 main.go:141] libmachine: Creating SSH key...
	I0401 11:24:01.124196   12872 main.go:141] libmachine: Creating VM...
	I0401 11:24:01.124196   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:24:04.122176   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:24:04.122303   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:04.122303   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0401 11:24:04.122399   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:24:06.000973   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:24:06.000973   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:06.000973   12872 main.go:141] libmachine: Creating VHD
	I0401 11:24:06.000973   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 11:24:09.897500   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : EECA16FE-B004-4547-B8DE-1C1C2D9B142B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 11:24:09.898250   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:09.898250   12872 main.go:141] libmachine: Writing magic tar header
	I0401 11:24:09.898250   12872 main.go:141] libmachine: Writing SSH key tar header
	I0401 11:24:09.907972   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 11:24:13.169670   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:13.170819   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:13.170890   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\disk.vhd' -SizeBytes 20000MB
	I0401 11:24:15.764189   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:15.764624   12872 main.go:141] libmachine: [stderr =====>] : 
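The three VHD calls above encode a small trick: a 10MB fixed VHD is created first because its flat on-disk layout lets the "magic tar header" and SSH key be written straight into the raw disk bytes; it is then converted to a dynamic VHD (deleting the source) and resized to the requested 20000MB. A sketch of the same sequence from Go, using a hypothetical runPS helper:

package main

import (
	"fmt"
	"os/exec"
)

// runPS is a hypothetical helper wrapping powershell.exe.
func runPS(script string) error {
	out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", script, err, out)
	}
	return nil
}

func main() {
	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02`
	steps := []string{
		// Small fixed VHD: flat layout, so raw bytes can be written into it.
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
	}
	for _, s := range steps {
		if err := runPS(s); err != nil {
			panic(err)
		}
	}
}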
	I0401 11:24:15.764741   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-401500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0401 11:24:19.532643   12872 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-401500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 11:24:19.532643   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:19.533185   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-401500-m02 -DynamicMemoryEnabled $false
	I0401 11:24:21.858037   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:21.858037   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:21.858703   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-401500-m02 -Count 2
	I0401 11:24:24.102721   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:24.102721   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:24.102721   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-401500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\boot2docker.iso'
	I0401 11:24:26.826117   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:26.826117   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:26.826627   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-401500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\disk.vhd'
	I0401 11:24:29.583986   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:29.584213   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:29.584213   12872 main.go:141] libmachine: Starting VM...
	I0401 11:24:29.584213   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-401500-m02
	I0401 11:24:32.808475   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:32.808475   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:32.808569   12872 main.go:141] libmachine: Waiting for host to start...
	I0401 11:24:32.808739   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:35.243595   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:35.243595   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:35.244422   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:37.950490   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:37.950490   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:38.964066   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:41.310344   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:41.310344   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:41.311358   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:43.972564   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:43.972628   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:44.973234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:47.289636   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:47.289636   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:47.290278   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:49.935956   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:49.935956   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:50.936667   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:53.294479   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:53.294479   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:53.295263   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:24:55.960299   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:24:55.960299   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:56.963871   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:24:59.271351   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:24:59.271651   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:24:59.271651   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:02.018689   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:02.018749   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:02.018865   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:04.286663   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:04.286663   12872 main.go:141] libmachine: [stderr =====>] : 
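The "Waiting for host to start..." exchange above repeats the same two queries, VM state and first NIC address, until Hyper-V reports an IP. A Go sketch of that poll loop (getPS is a hypothetical helper; the timeout and sleep are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// getPS runs a PowerShell one-liner and returns trimmed stdout.
func getPS(script string) (string, error) {
	out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := getPS(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, _ := getPS(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if ip != "" {
				return ip, nil // e.g. 172.19.149.50 above
			}
		}
		time.Sleep(time.Second) // each PowerShell call itself takes a few seconds
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("ha-401500-m02", 5*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}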
	I0401 11:25:04.286663   12872 machine.go:94] provisionDockerMachine start ...
	I0401 11:25:04.286663   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:06.582664   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:06.583149   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:06.583250   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:09.392036   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:09.392036   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:09.397400   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:09.410555   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:09.410555   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:25:09.534127   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 11:25:09.534229   12872 buildroot.go:166] provisioning hostname "ha-401500-m02"
	I0401 11:25:09.534322   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:11.828444   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:11.828444   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:11.829420   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:14.610883   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:14.610883   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:14.617059   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:14.617178   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:14.617178   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401500-m02 && echo "ha-401500-m02" | sudo tee /etc/hostname
	I0401 11:25:14.767974   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401500-m02
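Each provisioning step above is one SSH command run as user docker with the generated id_rsa. A hedged sketch using golang.org/x/crypto/ssh (minikube's own runner differs):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.19.149.50:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname ha-401500-m02 && echo "ha-401500-m02" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}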
	
	I0401 11:25:14.768258   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:17.045385   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:17.045471   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:17.045471   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:19.760122   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:19.760122   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:19.766096   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:19.766692   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:19.766806   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:25:19.914653   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:25:19.914653   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:25:19.914653   12872 buildroot.go:174] setting up certificates
	I0401 11:25:19.914653   12872 provision.go:84] configureAuth start
	I0401 11:25:19.914653   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:22.199218   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:22.199218   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:22.199515   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:24.943558   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:24.944664   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:24.944736   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:27.231270   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:27.231270   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:27.231270   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:29.910462   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:29.910533   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:29.910533   12872 provision.go:143] copyHostCerts
	I0401 11:25:29.910533   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 11:25:29.911064   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:25:29.911147   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:25:29.911679   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:25:29.912557   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 11:25:29.913507   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:25:29.913507   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:25:29.913507   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:25:29.915031   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 11:25:29.915890   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:25:29.915890   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:25:29.915890   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:25:29.917398   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-401500-m02 san=[127.0.0.1 172.19.149.50 ha-401500-m02 localhost minikube]
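The server cert above is signed by the local minikube CA with SANs covering localhost, both node IPs, and the HA VIP. A simplified Go sketch follows; to stay short it self-signs rather than chaining to the CA, and the SAN list mirrors the san=[...] entry in the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-401500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-401500-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.149.50")},
	}
	// Self-signed here (template doubles as parent); the real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}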
	I0401 11:25:30.020874   12872 provision.go:177] copyRemoteCerts
	I0401 11:25:30.041473   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:25:30.041473   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:32.277303   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:32.277303   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:32.277986   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:35.009020   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:35.009020   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:35.009905   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:25:35.122752   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0812441s)
	I0401 11:25:35.122752   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 11:25:35.122752   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 11:25:35.182764   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 11:25:35.183310   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 11:25:35.241412   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 11:25:35.242196   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:25:35.292811   12872 provision.go:87] duration metric: took 15.3780499s to configureAuth
	I0401 11:25:35.292876   12872 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:25:35.293462   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:25:35.293462   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:37.548794   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:37.548794   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:37.549234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:40.307749   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:40.307921   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:40.318205   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:40.319111   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:40.319111   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:25:40.444042   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:25:40.444099   12872 buildroot.go:70] root file system type: tmpfs
	I0401 11:25:40.444386   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:25:40.444386   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:42.703063   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:42.703548   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:42.703661   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:45.393155   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:45.393155   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:45.402922   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:45.402922   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:45.403706   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.153.73"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:25:45.570984   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.153.73
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 11:25:45.571051   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:47.872077   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:47.872077   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:47.872378   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:50.572850   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:50.572850   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:50.581231   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:25:50.582153   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:25:50.582153   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:25:52.814142   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
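Note the install guard above: diff -u old new || { mv; daemon-reload; enable; restart; } only swaps the unit in and restarts Docker when the rendered file actually changed, so re-provisioning is idempotent. The unit itself can be rendered host-side with a Go text/template; a sketch with an abbreviated template (the full text is the file echoed above):

package main

import (
	"os"
	"text/template"
)

const dockerUnit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={{.NoProxy}}"
# Clear the inherited ExecStart first; systemd rejects two ExecStart=
# lines for anything but Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker").Parse(dockerUnit))
	err := t.Execute(os.Stdout, struct{ NoProxy, Provider string }{
		NoProxy:  "172.19.153.73", // the first control-plane node's IP
		Provider: "hyperv",
	})
	if err != nil {
		panic(err)
	}
}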
	
	I0401 11:25:52.814142   12872 machine.go:97] duration metric: took 48.5271376s to provisionDockerMachine
	I0401 11:25:52.814142   12872 client.go:171] duration metric: took 2m1.6467552s to LocalClient.Create
	I0401 11:25:52.814142   12872 start.go:167] duration metric: took 2m1.6467552s to libmachine.API.Create "ha-401500"
	I0401 11:25:52.814142   12872 start.go:293] postStartSetup for "ha-401500-m02" (driver="hyperv")
	I0401 11:25:52.814142   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:25:52.826787   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:25:52.826787   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:25:55.098120   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:25:55.098120   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:55.098390   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:25:57.773663   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:25:57.773663   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:25:57.775050   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:25:57.883359   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0565359s)
	I0401 11:25:57.896080   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:25:57.903326   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:25:57.903326   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:25:57.903846   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:25:57.904815   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:25:57.904815   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 11:25:57.916785   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:25:57.936975   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:25:57.983525   12872 start.go:296] duration metric: took 5.1693458s for postStartSetup
	I0401 11:25:57.987185   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:00.206992   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:00.206992   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:00.207145   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:02.897814   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:02.897814   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:02.898907   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:26:02.901729   12872 start.go:128] duration metric: took 2m11.738322s to createHost
	I0401 11:26:02.901729   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:05.118268   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:05.118495   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:05.118495   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:07.852840   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:07.853199   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:07.859062   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:26:07.859062   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:26:07.859062   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 11:26:07.986939   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711970767.979901784
	
	I0401 11:26:07.986939   12872 fix.go:216] guest clock: 1711970767.979901784
	I0401 11:26:07.986939   12872 fix.go:229] Guest: 2024-04-01 11:26:07.979901784 +0000 UTC Remote: 2024-04-01 11:26:02.9017293 +0000 UTC m=+353.133323501 (delta=5.078172484s)
	I0401 11:26:07.986939   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:10.254232   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:10.254459   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:10.254459   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:12.988474   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:12.988474   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:12.994969   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:26:12.995906   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.149.50 22 <nil> <nil>}
	I0401 11:26:12.995906   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711970767
	I0401 11:26:13.147265   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:26:07 UTC 2024
	
	I0401 11:26:13.147447   12872 fix.go:236] clock set: Mon Apr  1 11:26:07 UTC 2024
	 (err=<nil>)
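The fix.go lines above compare the guest's date +%s.%N against the host clock and then re-apply epoch seconds with date -s. A small Go sketch of that delta check, using the values from this log (the 2s tolerance is an assumption, not minikube's constant):

package main

import (
	"fmt"
	"time"
)

func main() {
	// From the log: guest reported 1711970767.979901784,
	// host-side Remote was 2024-04-01 11:26:02.9017293 UTC.
	guest := time.Unix(1711970767, 979901784)
	host := time.Date(2024, time.April, 1, 11, 26, 2, 901729300, time.UTC)
	delta := guest.Sub(host)
	fmt.Printf("delta=%v\n", delta) // ~5.078172484s, as logged
	if delta.Abs() > 2*time.Second { // hypothetical tolerance
		// The log shows epoch seconds being re-applied over SSH:
		fmt.Printf("sudo date -s @%d\n", guest.Unix()) // sudo date -s @1711970767
	}
}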
	I0401 11:26:13.147447   12872 start.go:83] releasing machines lock for "ha-401500-m02", held for 2m21.984958s
	I0401 11:26:13.147730   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:15.400571   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:15.400571   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:15.400832   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:18.091153   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:18.091153   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:18.094043   12872 out.go:177] * Found network options:
	I0401 11:26:18.096873   12872 out.go:177]   - NO_PROXY=172.19.153.73
	W0401 11:26:18.099078   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:26:18.101655   12872 out.go:177]   - NO_PROXY=172.19.153.73
	W0401 11:26:18.104059   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:26:18.106039   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:26:18.108567   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:26:18.108567   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:18.119016   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 11:26:18.119016   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m02 ).state
	I0401 11:26:20.431266   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:20.431379   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:20.431266   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:20.431379   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:20.431379   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:20.431505   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:23.226507   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:23.227048   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:23.227614   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:26:23.252104   12872 main.go:141] libmachine: [stdout =====>] : 172.19.149.50
	
	I0401 11:26:23.252104   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:23.252755   12872 sshutil.go:53] new ssh client: &{IP:172.19.149.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa Username:docker}
	I0401 11:26:23.416258   12872 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2963791s)
	W0401 11:26:23.416258   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:26:23.416391   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3077864s)
	I0401 11:26:23.429709   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:26:23.464692   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 11:26:23.464772   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:26:23.464926   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:26:23.515323   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:26:23.550076   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:26:23.572889   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:26:23.586640   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:26:23.622061   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:26:23.656922   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:26:23.689028   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:26:23.727818   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:26:23.765687   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:26:23.799951   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:26:23.835474   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:26:23.868237   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:26:23.900104   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:26:23.931666   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:24.146851   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:26:24.192884   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:26:24.206348   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:26:24.246889   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:26:24.284058   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:26:24.328234   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:26:24.367282   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:26:24.404276   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:26:24.468484   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:26:24.495640   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:26:24.549396   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:26:24.570343   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:26:24.595364   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:26:24.641753   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:26:24.857265   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:26:25.071243   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:26:25.071243   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:26:25.120353   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:25.332761   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:26:27.923293   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899259s)
	I0401 11:26:27.935255   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 11:26:27.971850   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:26:28.013726   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 11:26:28.234460   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 11:26:28.444688   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:28.664253   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 11:26:28.708931   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:26:28.747517   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:28.968350   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 11:26:29.082405   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 11:26:29.095396   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 11:26:29.104391   12872 start.go:562] Will wait 60s for crictl version
	I0401 11:26:29.116369   12872 ssh_runner.go:195] Run: which crictl
	I0401 11:26:29.134931   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:26:29.219441   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
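start.go above waits up to 60s each for the cri-dockerd socket path and for crictl to answer. A guest-side Go sketch of such a bounded wait (the 500ms retry interval is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor retries check until it succeeds or the timeout elapses.
func waitFor(desc string, check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %v", desc, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitFor("/var/run/cri-dockerd.sock", func() error {
		_, err := os.Stat("/var/run/cri-dockerd.sock")
		return err
	}, 60*time.Second); err != nil {
		panic(err)
	}
	if err := waitFor("crictl", func() error {
		return exec.Command("/usr/bin/crictl", "version").Run()
	}, 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("container runtime ready")
}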
	I0401 11:26:29.228704   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:26:29.273858   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:26:29.312134   12872 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 11:26:29.314717   12872 out.go:177]   - env NO_PROXY=172.19.153.73
	I0401 11:26:29.318763   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 11:26:29.322725   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 11:26:29.325764   12872 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 11:26:29.325764   12872 ip.go:210] interface addr: 172.19.144.1/20
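The ip.go scan above picks the host-side address for host.minikube.internal by matching interface names against the "vEthernet (Default Switch)" prefix and taking its IPv4 address. A runnable Go sketch of the same scan:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
			continue // skips e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" above
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			panic(err)
		}
		for _, a := range addrs {
			// Keep only IPv4; the fe80:: link-local address is skipped.
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Println(ipnet.IP) // 172.19.144.1 in this log
			}
		}
	}
}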
	I0401 11:26:29.337759   12872 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 11:26:29.343342   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:26:29.368898   12872 mustload.go:65] Loading cluster: ha-401500
	I0401 11:26:29.369082   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:26:29.370062   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:26:31.547016   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:31.547526   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:31.547526   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:26:31.548432   12872 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500 for IP: 172.19.149.50
	I0401 11:26:31.548505   12872 certs.go:194] generating shared ca certs ...
	I0401 11:26:31.548505   12872 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:26:31.549160   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 11:26:31.549499   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 11:26:31.549653   12872 certs.go:256] generating profile certs ...
	I0401 11:26:31.550257   12872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key
	I0401 11:26:31.550438   12872 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6
	I0401 11:26:31.550569   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.153.73 172.19.149.50 172.19.159.254]
	I0401 11:26:31.955806   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6 ...
	I0401 11:26:31.955806   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6: {Name:mkcf0f68864f471e42f9c64286a52246005b41fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:26:31.956879   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6 ...
	I0401 11:26:31.956879   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6: {Name:mkf6efe69cff6bca356149ae606453d01bea64f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:26:31.958097   12872 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.0cad9fc6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt
	I0401 11:26:31.971020   12872 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.0cad9fc6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key
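
The apiserver serving cert generated above carries every address a client might dial as an IP SAN: the in-cluster service IP 10.96.0.1, loopback, both control-plane node IPs, and the kube-vip VIP 172.19.159.254. A minimal sketch of building such a certificate with crypto/x509 (self-signed here for brevity, whereas minikube signs with its shared minikubeCA; the 26280h lifetime matches the profile's CertExpiration):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // profile's CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log: service IP, loopback, node IPs, HA VIP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("172.19.153.73"), net.ParseIP("172.19.149.50"), net.ParseIP("172.19.159.254"),
            },
        }
        // Self-signed for brevity; minikube signs with minikubeCA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
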
	I0401 11:26:31.973590   12872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key
	I0401 11:26:31.973671   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 11:26:31.973737   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0401 11:26:31.973737   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 11:26:31.973737   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 11:26:31.974269   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 11:26:31.974402   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 11:26:31.974498   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 11:26:31.974663   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 11:26:31.974663   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 11:26:31.975240   12872 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 11:26:31.975437   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 11:26:31.975779   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 11:26:31.976042   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 11:26:31.976042   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 11:26:31.976843   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 11:26:31.976996   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem -> /usr/share/ca-certificates/1260.pem
	I0401 11:26:31.976996   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /usr/share/ca-certificates/12602.pem
	I0401 11:26:31.976996   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:31.977624   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:26:34.285235   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:34.285310   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:34.285310   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:37.089075   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:26:37.089134   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:37.089134   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:26:37.195542   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0401 11:26:37.203253   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 11:26:37.237772   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0401 11:26:37.245497   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0401 11:26:37.279217   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 11:26:37.287216   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 11:26:37.320170   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0401 11:26:37.326555   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 11:26:37.361449   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0401 11:26:37.369997   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 11:26:37.406102   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0401 11:26:37.413141   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0401 11:26:37.433969   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:26:37.489020   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:26:37.543100   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:26:37.595921   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 11:26:37.647343   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0401 11:26:37.700082   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 11:26:37.763314   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:26:37.823076   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 11:26:37.875767   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 11:26:37.930567   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 11:26:37.980320   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:26:38.030123   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 11:26:38.067802   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0401 11:26:38.103871   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 11:26:38.139269   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 11:26:38.174475   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 11:26:38.209026   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0401 11:26:38.244216   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0401 11:26:38.290178   12872 ssh_runner.go:195] Run: openssl version
	I0401 11:26:38.314297   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 11:26:38.348513   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 11:26:38.357217   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 11:26:38.371159   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 11:26:38.393385   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
	I0401 11:26:38.432517   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 11:26:38.467822   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 11:26:38.474975   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 11:26:38.487977   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 11:26:38.512119   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:26:38.546415   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:26:38.584732   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:38.593856   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:38.606932   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:26:38.630222   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
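
The openssl x509 -hash calls above compute OpenSSL's subject-name hash, and each CA cert is then linked as /etc/ssl/certs/<hash>.0 (51391683, 3ec20f2e and b5213941 in this run) so that lookup-by-hash finds it. A sketch of that pattern (illustrative, not minikube's certs.go):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert hashes a PEM cert with `openssl x509 -hash` and links it
    // as /etc/ssl/certs/<hash>.0, the convention the log shows above.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem here
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // ignore error; mimics `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
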
	I0401 11:26:38.667692   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:26:38.677524   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 11:26:38.678146   12872 kubeadm.go:928] updating node {m02 172.19.149.50 8443 v1.29.3 docker true true} ...
	I0401 11:26:38.678146   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-401500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.149.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 11:26:38.678146   12872 kube-vip.go:111] generating kube-vip config ...
	I0401 11:26:38.691285   12872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 11:26:38.717655   12872 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 11:26:38.717655   12872 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
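
With this manifest, kube-vip does leader election over the plndr-cp-lock lease (5s lease, 3s renew deadline, 1s retry), advertises 172.19.159.254 via ARP on eth0, and load-balances control-plane traffic on port 8443. A quick reachability probe against the VIP once a leader holds it (illustrative only; TLS verification is skipped because only liveness matters here):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        c := &http.Client{
            Timeout: 5 * time.Second,
            // Verification skipped: this only checks that something answers on the VIP.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get("https://172.19.159.254:8443/healthz")
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("VIP answered:", resp.Status)
    }
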
	I0401 11:26:38.731406   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 11:26:38.750144   12872 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 11:26:38.765991   12872 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 11:26:38.790074   12872 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet
	I0401 11:26:38.790319   12872 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl
	I0401 11:26:38.790463   12872 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm
	I0401 11:26:39.766323   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:26:39.778138   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:26:39.780131   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:26:39.789436   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 11:26:39.790413   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0401 11:26:39.798402   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:26:39.869438   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 11:26:39.869438   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 11:26:40.376909   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:26:40.422914   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:26:40.434500   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:26:40.463499   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 11:26:40.463499   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
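
The ?checksum=file:...sha256 query on the download URLs above pairs each binary with its published digest. A standalone sketch of the same download-and-verify step for kubeadm, assuming the .sha256 body holds just the hex digest (which is how the checksum URL treats it):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    const url = "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm"

    func main() {
        out, err := os.Create("kubeadm")
        if err != nil {
            panic(err)
        }
        defer out.Close()

        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // Hash while writing, so no second pass over the file is needed.
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            panic(err)
        }

        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            panic(err)
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            panic(err)
        }

        got := hex.EncodeToString(h.Sum(nil))
        if got != strings.TrimSpace(string(want)) { // assumes the file holds only the digest
            panic("checksum mismatch for kubeadm: " + got)
        }
        fmt.Println("kubeadm verified:", got)
    }
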
	I0401 11:26:41.216486   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 11:26:41.242631   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 11:26:41.285210   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:26:41.320128   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 11:26:41.367442   12872 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0401 11:26:41.375281   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:26:41.413219   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:26:41.641911   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:26:41.674467   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:26:41.675295   12872 start.go:316] joinCluster: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:26:41.675531   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 11:26:41.675643   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:26:43.930948   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:26:43.930948   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:43.931067   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:26:46.675502   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:26:46.675590   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:26:46.676111   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:26:46.910230   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2346615s)
	I0401 11:26:46.910304   12872 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:26:46.910410   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lfqzir.q7dxua6s02mjgst6 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m02 --control-plane --apiserver-advertise-address=172.19.149.50 --apiserver-bind-port=8443"
	I0401 11:27:35.007302   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lfqzir.q7dxua6s02mjgst6 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m02 --control-plane --apiserver-advertise-address=172.19.149.50 --apiserver-bind-port=8443": (48.0965016s)
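
The join command above pins the cluster CA with --discovery-token-ca-cert-hash: per kubeadm convention, that value is a sha256 over the DER-encoded Subject Public Key Info of ca.crt. A sketch that recomputes it from the cert already pushed to /var/lib/minikube/certs:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's pin is sha256 over the DER-encoded SubjectPublicKeyInfo.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
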
	I0401 11:27:35.007482   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 11:27:36.031458   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0229296s)
	I0401 11:27:36.048246   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-401500-m02 minikube.k8s.io/updated_at=2024_04_01T11_27_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=ha-401500 minikube.k8s.io/primary=false
	I0401 11:27:36.251226   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-401500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0401 11:27:36.419429   12872 start.go:318] duration metric: took 54.7438095s to joinCluster
	I0401 11:27:36.419869   12872 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:27:36.420742   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:27:36.424647   12872 out.go:177] * Verifying Kubernetes components...
	I0401 11:27:36.440084   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:27:36.890925   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:27:36.938898   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:27:36.939462   12872 kapi.go:59] client config for ha-401500: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 11:27:36.939770   12872 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.153.73:8443
	I0401 11:27:36.940555   12872 node_ready.go:35] waiting up to 6m0s for node "ha-401500-m02" to be "Ready" ...
	I0401 11:27:36.940755   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:36.940755   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:36.940755   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:36.940755   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:36.962308   12872 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0401 11:27:37.442668   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:37.442668   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:37.442668   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:37.442668   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:37.449266   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:37.952975   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:37.953010   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:37.953065   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:37.953065   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:37.959780   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:38.445133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:38.445133   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:38.445133   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:38.445133   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:38.450629   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:38.950450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:38.950704   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:38.950704   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:38.950704   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:38.954828   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:38.956527   12872 node_ready.go:53] node "ha-401500-m02" has status "Ready":"False"
	I0401 11:27:39.442450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:39.442512   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:39.442512   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:39.442512   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:39.451276   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:39.944649   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:39.944712   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:39.944746   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:39.944746   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:39.949086   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:40.452147   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:40.452254   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:40.452254   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:40.452254   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:40.462365   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:27:40.942213   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:40.942213   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:40.942302   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:40.942322   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:41.086208   12872 round_trippers.go:574] Response Status: 200 OK in 143 milliseconds
	I0401 11:27:41.087261   12872 node_ready.go:53] node "ha-401500-m02" has status "Ready":"False"
	I0401 11:27:41.448193   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:41.448193   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:41.448193   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:41.448193   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:41.452823   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:41.952994   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:41.952994   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:41.952994   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:41.952994   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:41.957756   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:42.444599   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:42.444599   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:42.444599   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:42.444599   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:42.459339   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:27:42.950589   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:42.950589   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:42.950670   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:42.950670   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:42.956599   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:43.454422   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:43.454508   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:43.454508   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:43.454508   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:43.459854   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:43.461387   12872 node_ready.go:53] node "ha-401500-m02" has status "Ready":"False"
	I0401 11:27:43.944550   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:43.944550   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:43.944792   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:43.944792   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:43.953170   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:44.444380   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:44.444522   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.444522   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.444522   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.449834   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:44.451252   12872 node_ready.go:49] node "ha-401500-m02" has status "Ready":"True"
	I0401 11:27:44.451252   12872 node_ready.go:38] duration metric: took 7.5104442s for node "ha-401500-m02" to be "Ready" ...
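
The raw GET loop above is the test's hand-rolled readiness wait on the node's Ready condition. The same check written against client-go for comparison (the kubeconfig path is a placeholder; the polling helper is the standard apimachinery one, not a minikube helper):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path, not the test's actual one.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(ctx, "ha-401500-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API hiccups as "not ready yet"
                }
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }
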
	I0401 11:27:44.451388   12872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:27:44.451530   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:44.451530   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.451530   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.451530   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.465553   12872 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0401 11:27:44.474721   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.474721   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4xvlf
	I0401 11:27:44.474721   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.474721   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.474721   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.480922   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:44.481478   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:44.482084   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.482084   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.482084   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.486298   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:44.486849   12872 pod_ready.go:92] pod "coredns-76f75df574-4xvlf" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:44.486849   12872 pod_ready.go:81] duration metric: took 12.1271ms for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.486849   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.487386   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vjslq
	I0401 11:27:44.487386   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.487386   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.487449   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.491143   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:27:44.492219   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:44.492295   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.492295   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.492295   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.497127   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:44.498178   12872 pod_ready.go:92] pod "coredns-76f75df574-vjslq" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:44.498178   12872 pod_ready.go:81] duration metric: took 11.3293ms for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.498178   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.498178   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500
	I0401 11:27:44.498178   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.498178   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.498178   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.502774   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:44.503766   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:44.503766   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.503766   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.503766   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.509771   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:44.510856   12872 pod_ready.go:92] pod "etcd-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:44.510856   12872 pod_ready.go:81] duration metric: took 12.6778ms for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.510856   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:44.510856   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:44.510856   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.510856   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.510856   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.514665   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:27:44.515984   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:44.515984   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:44.515984   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:44.515984   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:44.520576   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:45.023068   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:45.023068   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.023068   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.023068   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.027637   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:45.028972   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:45.028972   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.028972   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.028972   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.033457   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:45.519479   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:45.519479   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.519479   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.519797   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.527332   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:27:45.527602   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:45.527602   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:45.528183   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:45.528183   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:45.532214   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:46.022058   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:46.022058   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.022058   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.022058   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.027868   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.029105   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.029182   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.029277   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.029277   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.037606   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:46.524102   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:27:46.524279   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.524279   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.524279   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.530266   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.533126   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.533206   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.533206   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.533292   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.552770   12872 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0401 11:27:46.554251   12872 pod_ready.go:92] pod "etcd-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:46.554339   12872 pod_ready.go:81] duration metric: took 2.0434691s for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.554446   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.554582   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500
	I0401 11:27:46.554582   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.554582   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.554582   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.562948   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:46.564016   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:46.564558   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.564558   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.564612   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.569625   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.570609   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:46.570609   12872 pod_ready.go:81] duration metric: took 16.1629ms for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.570609   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:46.648210   12872 request.go:629] Waited for 76.9405ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:27:46.648210   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:27:46.648210   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.648210   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.648210   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.653786   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:46.851473   12872 request.go:629] Waited for 195.9621ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.851817   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:46.851817   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:46.851817   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:46.851817   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:46.857564   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
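
These request.go:629 "Waited ... due to client-side throttling" lines are client-go's default rate limiter at work: the rest.Config dumped earlier leaves QPS and Burst at 0, which falls back to the defaults of 5 QPS with a burst of 10, so back-to-back GETs queue briefly. A sketch of raising the limits for a chatty loop (placeholder kubeconfig path):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; substitute the profile's own.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // QPS/Burst of 0 mean client-go defaults (5 QPS, burst 10), which is
        // what produces the throttling waits in the log. Raise both to avoid
        // queueing in tight polling loops.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
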
	I0401 11:27:47.086526   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:27:47.086650   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.086650   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.086650   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.092544   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:47.259722   12872 request.go:629] Waited for 166.2169ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:47.259722   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:47.259722   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.259722   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.259722   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.265543   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:47.267070   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:47.267233   12872 pod_ready.go:81] duration metric: took 696.6198ms for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:47.267233   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:47.446131   12872 request.go:629] Waited for 178.7861ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500
	I0401 11:27:47.446475   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500
	I0401 11:27:47.446475   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.446475   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.446475   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.452527   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:47.648710   12872 request.go:629] Waited for 194.5476ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:47.648710   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:47.648710   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.648710   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.648710   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.654327   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:47.656224   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:47.656358   12872 pod_ready.go:81] duration metric: took 389.1221ms for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:47.656358   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:47.852951   12872 request.go:629] Waited for 196.5913ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m02
	I0401 11:27:47.852951   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m02
	I0401 11:27:47.852951   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:47.852951   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:47.852951   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:47.858663   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:48.057777   12872 request.go:629] Waited for 197.3655ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.058010   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.058010   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.058081   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.058081   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.064653   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:48.065237   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:48.065295   12872 pod_ready.go:81] duration metric: took 408.9341ms for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.065295   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.245186   12872 request.go:629] Waited for 179.7853ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zds
	I0401 11:27:48.245186   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zds
	I0401 11:27:48.245186   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.245186   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.245186   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.250210   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:48.450759   12872 request.go:629] Waited for 198.752ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.450860   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:48.450860   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.450860   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.450949   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.456305   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:27:48.457207   12872 pod_ready.go:92] pod "kube-proxy-28zds" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:48.457276   12872 pod_ready.go:81] duration metric: took 391.9779ms for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.457331   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.656255   12872 request.go:629] Waited for 198.8543ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqcpv
	I0401 11:27:48.656255   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqcpv
	I0401 11:27:48.656255   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.656255   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.656255   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.662824   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:48.844811   12872 request.go:629] Waited for 179.7143ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:48.845046   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:48.845046   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:48.845128   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:48.845128   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:48.851746   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:48.852626   12872 pod_ready.go:92] pod "kube-proxy-hqcpv" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:48.852661   12872 pod_ready.go:81] duration metric: took 395.2927ms for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:48.852687   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.046849   12872 request.go:629] Waited for 194.1607ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500
	I0401 11:27:49.047019   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500
	I0401 11:27:49.047019   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.047019   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.047019   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.053647   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:49.249201   12872 request.go:629] Waited for 193.3384ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:49.249201   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:27:49.249476   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.249476   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.249476   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.255856   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:49.257030   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:49.257030   12872 pod_ready.go:81] duration metric: took 404.3396ms for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.257030   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.452619   12872 request.go:629] Waited for 195.4041ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:27:49.452723   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:27:49.452723   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.452723   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.452723   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.460845   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:27:49.656358   12872 request.go:629] Waited for 194.4507ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:49.656358   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:27:49.656755   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.656755   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.656755   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.663121   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:27:49.664267   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:27:49.664429   12872 pod_ready.go:81] duration metric: took 407.3962ms for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:27:49.664429   12872 pod_ready.go:38] duration metric: took 5.2130044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
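
For reference, the readiness gate that the pod_ready phase above implements can be reproduced by hand with kubectl. A minimal sketch, assuming the kubeconfig context is named after the profile (ha-401500) and using pod names taken from this log:

    # Wait up to 6m for each system pod to report the Ready condition,
    # mirroring pod_ready.go's per-pod wait.
    $pods = 'kube-apiserver-ha-401500-m02', 'kube-controller-manager-ha-401500',
            'kube-proxy-28zds', 'kube-scheduler-ha-401500-m02'
    foreach ($p in $pods) {
        kubectl --context ha-401500 -n kube-system wait "pod/$p" --for=condition=Ready --timeout=6m0s
    }
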
	I0401 11:27:49.664544   12872 api_server.go:52] waiting for apiserver process to appear ...
	I0401 11:27:49.678600   12872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 11:27:49.707910   12872 api_server.go:72] duration metric: took 13.2878831s to wait for apiserver process to appear ...
	I0401 11:27:49.708743   12872 api_server.go:88] waiting for apiserver healthz status ...
	I0401 11:27:49.708743   12872 api_server.go:253] Checking apiserver healthz at https://172.19.153.73:8443/healthz ...
	I0401 11:27:49.716486   12872 api_server.go:279] https://172.19.153.73:8443/healthz returned 200:
	ok
	I0401 11:27:49.716754   12872 round_trippers.go:463] GET https://172.19.153.73:8443/version
	I0401 11:27:49.716771   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.716820   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.716838   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.718603   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 11:27:49.718734   12872 api_server.go:141] control plane version: v1.29.3
	I0401 11:27:49.718853   12872 api_server.go:131] duration metric: took 10.1105ms to wait for apiserver health ...
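
The healthz and version probes above can also be run by hand; on a default RBAC setup the system:public-info-viewer binding grants anonymous access to /healthz and /version, so no client certificate is needed (sketch only, endpoint taken from the log):

    # -k skips verification of minikube's self-signed apiserver cert.
    curl.exe -sk https://172.19.153.73:8443/healthz    # expect: ok
    curl.exe -sk https://172.19.153.73:8443/version    # expect: JSON with gitVersion v1.29.3
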
	I0401 11:27:49.718853   12872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 11:27:49.858508   12872 request.go:629] Waited for 139.6231ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:49.858611   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:49.858611   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:49.858729   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:49.858729   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:49.867821   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:27:49.875964   12872 system_pods.go:59] 17 kube-system pods found
	I0401 11:27:49.875964   12872 system_pods.go:61] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:27:49.875964   12872 system_pods.go:61] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:27:49.875964   12872 system_pods.go:74] duration metric: took 157.0791ms to wait for pod list to return data ...
	I0401 11:27:49.875964   12872 default_sa.go:34] waiting for default service account to be created ...
	I0401 11:27:50.060269   12872 request.go:629] Waited for 184.3038ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/default/serviceaccounts
	I0401 11:27:50.060269   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/default/serviceaccounts
	I0401 11:27:50.060269   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:50.060269   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:50.060269   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:50.064649   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:50.065492   12872 default_sa.go:45] found service account: "default"
	I0401 11:27:50.065492   12872 default_sa.go:55] duration metric: took 189.5266ms for default service account to be created ...
	I0401 11:27:50.065492   12872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 11:27:50.248980   12872 request.go:629] Waited for 183.2936ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:50.249060   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:27:50.249060   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:50.249060   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:50.249060   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:50.258369   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:27:50.266999   12872 system_pods.go:86] 17 kube-system pods found
	I0401 11:27:50.266999   12872 system_pods.go:89] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:27:50.266999   12872 system_pods.go:89] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:27:50.266999   12872 system_pods.go:126] duration metric: took 201.5058ms to wait for k8s-apps to be running ...
	I0401 11:27:50.267527   12872 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 11:27:50.278810   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:27:50.308270   12872 system_svc.go:56] duration metric: took 40.7423ms WaitForService to wait for kubelet
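
The same kubelet liveness check can be issued manually over the machine's SSH key. A sketch, assuming the node under verification here is m02 (the IP and key path follow the patterns used elsewhere in this log):

    ssh -i 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m02\id_rsa' docker@172.19.149.50 'systemctl is-active kubelet'
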
	I0401 11:27:50.308360   12872 kubeadm.go:576] duration metric: took 13.8883284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:27:50.308360   12872 node_conditions.go:102] verifying NodePressure condition ...
	I0401 11:27:50.454420   12872 request.go:629] Waited for 145.8884ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes
	I0401 11:27:50.454790   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes
	I0401 11:27:50.454790   12872 round_trippers.go:469] Request Headers:
	I0401 11:27:50.454790   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:27:50.454867   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:27:50.459106   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:27:50.461264   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:27:50.461353   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:27:50.461353   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:27:50.461353   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:27:50.461401   12872 node_conditions.go:105] duration metric: took 152.9923ms to run NodePressure ...
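
The NodePressure check reads each node's capacity from the API; the same figures are visible with kubectl (context name assumed to match the profile):

    # The Capacity section of `describe nodes` includes the cpu and
    # ephemeral-storage values logged above.
    kubectl --context ha-401500 describe nodes |
        Select-String -Pattern 'Name:', 'cpu:', 'ephemeral-storage:'
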
	I0401 11:27:50.461401   12872 start.go:240] waiting for startup goroutines ...
	I0401 11:27:50.461447   12872 start.go:254] writing updated cluster config ...
	I0401 11:27:50.464624   12872 out.go:177] 
	I0401 11:27:50.479371   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:27:50.479371   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:27:50.486271   12872 out.go:177] * Starting "ha-401500-m03" control-plane node in "ha-401500" cluster
	I0401 11:27:50.489441   12872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 11:27:50.489441   12872 cache.go:56] Caching tarball of preloaded images
	I0401 11:27:50.490136   12872 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 11:27:50.490314   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 11:27:50.490480   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:27:50.495394   12872 start.go:360] acquireMachinesLock for ha-401500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 11:27:50.495394   12872 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-401500-m03"
	I0401 11:27:50.496054   12872 start.go:93] Provisioning new machine with config: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:27:50.496089   12872 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0401 11:27:50.497946   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 11:27:50.498892   12872 start.go:159] libmachine.API.Create for "ha-401500" (driver="hyperv")
	I0401 11:27:50.498892   12872 client.go:168] LocalClient.Create starting
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:27:50.498892   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 11:27:50.499900   12872 main.go:141] libmachine: Decoding PEM data...
	I0401 11:27:50.499900   12872 main.go:141] libmachine: Parsing certificate...
	I0401 11:27:50.499900   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 11:27:52.547405   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 11:27:52.548425   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:27:52.548533   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 11:27:54.399368   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 11:27:54.399368   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:27:54.400111   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:27:56.009894   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:27:56.010495   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:27:56.010573   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:27:59.964174   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:27:59.967124   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:27:59.968808   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 11:28:00.481020   12872 main.go:141] libmachine: Creating SSH key...
	I0401 11:28:00.705339   12872 main.go:141] libmachine: Creating VM...
	I0401 11:28:00.705339   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 11:28:03.773115   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 11:28:03.773191   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:03.773294   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0401 11:28:03.773362   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 11:28:05.656597   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 11:28:05.656597   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:05.657417   12872 main.go:141] libmachine: Creating VHD
	I0401 11:28:05.657417   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 11:28:09.602344   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 59C07E18-6B93-4D43-AE0D-B8080CD51ED7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 11:28:09.603388   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:09.603388   12872 main.go:141] libmachine: Writing magic tar header
	I0401 11:28:09.603388   12872 main.go:141] libmachine: Writing SSH key tar header
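
"Writing magic tar header" refers to the docker-machine-style trick of embedding a tar stream carrying the SSH key directly in the fixed VHD: a fixed VHD stores raw data first and its 512-byte footer last, so the guest can spot the tar magic at offset 0 and unpack the key on first boot. A rough illustration of the idea (hypothetical file names; minikube does this in Go, not via these commands):

    # Illustration only: write a tar archive at byte 0 of the fixed-size VHD.
    tar.exe -cf userdata.tar id_rsa.pub
    $bytes = [System.IO.File]::ReadAllBytes('userdata.tar')
    $fs = [System.IO.File]::Open('fixed.vhd', 'Open', 'ReadWrite')
    $fs.Write($bytes, 0, $bytes.Length)
    $fs.Close()

The Convert-VHD to Dynamic that follows preserves this data while letting the disk grow on demand.
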
	I0401 11:28:09.612931   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 11:28:12.924598   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:12.925654   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:12.925654   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\disk.vhd' -SizeBytes 20000MB
	I0401 11:28:15.564225   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:15.564490   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:15.564490   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-401500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0401 11:28:19.404527   12872 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-401500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 11:28:19.404527   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:19.405133   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-401500-m03 -DynamicMemoryEnabled $false
	I0401 11:28:21.797156   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:21.797156   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:21.797276   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-401500-m03 -Count 2
	I0401 11:28:24.099764   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:24.099764   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:24.099983   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-401500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\boot2docker.iso'
	I0401 11:28:26.857443   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:26.857443   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:26.857580   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-401500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\disk.vhd'
	I0401 11:28:29.666017   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:29.666229   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:29.666229   12872 main.go:141] libmachine: Starting VM...
	I0401 11:28:29.666309   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-401500-m03
	I0401 11:28:32.861287   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:32.861287   12872 main.go:141] libmachine: [stderr =====>] : 
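
The PowerShell invocations above condense to the following sequence (parameters verbatim from the log; requires an elevated session with the Hyper-V module):

    $m = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03'
    Hyper-V\New-VHD -Path "$m\fixed.vhd" -SizeBytes 10MB -Fixed
    Hyper-V\Convert-VHD -Path "$m\fixed.vhd" -DestinationPath "$m\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$m\disk.vhd" -SizeBytes 20000MB
    Hyper-V\New-VM ha-401500-m03 -Path $m -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-401500-m03 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-401500-m03 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-401500-m03 -Path "$m\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-401500-m03 -Path "$m\disk.vhd"
    Hyper-V\Start-VM ha-401500-m03
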
	I0401 11:28:32.861287   12872 main.go:141] libmachine: Waiting for host to start...
	I0401 11:28:32.861287   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:35.283506   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:35.283506   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:35.284314   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:38.001661   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:38.001661   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:39.007282   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:41.377907   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:41.377907   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:41.378002   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:44.059304   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:44.060227   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:45.074063   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:47.418783   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:47.418783   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:47.418783   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:50.141252   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:50.141308   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:51.151912   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:53.476765   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:53.476765   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:53.476765   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:28:56.144640   12872 main.go:141] libmachine: [stdout =====>] : 
	I0401 11:28:56.144790   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:57.149948   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:28:59.497068   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:28:59.497169   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:28:59.497169   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:02.257353   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:02.257353   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:02.257353   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:04.503678   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:04.503678   12872 main.go:141] libmachine: [stderr =====>] : 
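
The wait loop above polls the VM state and first NIC until DHCP hands out an address (empty stdout on each attempt until 11:29:02, then 172.19.145.208). The equivalent by hand, as a sketch:

    # Poll every 5s until the first adapter reports an IPv4 address.
    do {
        Start-Sleep -Seconds 5
        $ip = (Hyper-V\Get-VM ha-401500-m03).NetworkAdapters[0].IPAddresses |
            Where-Object { $_ -match '^\d+\.\d+\.\d+\.\d+$' } |
            Select-Object -First 1
    } until ($ip)
    $ip
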
	I0401 11:29:04.504240   12872 machine.go:94] provisionDockerMachine start ...
	I0401 11:29:04.504380   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:06.845449   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:06.845449   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:06.845449   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:09.620167   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:09.620167   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:09.627761   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:09.627761   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:09.627761   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:29:09.749049   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 11:29:09.749049   12872 buildroot.go:166] provisioning hostname "ha-401500-m03"
	I0401 11:29:09.749049   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:12.053314   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:12.053405   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:12.053498   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:14.777512   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:14.777745   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:14.783706   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:14.784589   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:14.784589   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401500-m03 && echo "ha-401500-m03" | sudo tee /etc/hostname
	I0401 11:29:14.935641   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401500-m03
	
	I0401 11:29:14.936194   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:17.215469   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:17.215663   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:17.215754   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:19.961506   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:19.961506   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:19.967928   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:19.968662   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:19.970766   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:29:20.119063   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:29:20.119640   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 11:29:20.119640   12872 buildroot.go:174] setting up certificates
	I0401 11:29:20.119709   12872 provision.go:84] configureAuth start
	I0401 11:29:20.119779   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:22.397848   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:22.398077   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:22.398142   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:25.142348   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:25.142348   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:25.143234   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:27.481424   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:27.481881   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:27.482114   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:30.257381   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:30.257381   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:30.257457   12872 provision.go:143] copyHostCerts
	I0401 11:29:30.257634   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 11:29:30.257830   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 11:29:30.257830   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 11:29:30.257984   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 11:29:30.259695   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 11:29:30.259751   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 11:29:30.259751   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 11:29:30.260302   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 11:29:30.261251   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 11:29:30.261251   12872 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 11:29:30.261251   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 11:29:30.261848   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 11:29:30.262994   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-401500-m03 san=[127.0.0.1 172.19.145.208 ha-401500-m03 localhost minikube]
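
minikube generates the server certificate in-process; an equivalent of the logged parameters with the openssl CLI would look roughly like this (illustrative file names, not minikube's actual mechanism):

    # SANs copied from the provision.go line above.
    Set-Content -Path san.cnf -Value 'subjectAltName=IP:127.0.0.1,IP:172.19.145.208,DNS:ha-401500-m03,DNS:localhost,DNS:minikube'
    openssl req -new -newkey rsa:2048 -nodes -subj '/O=jenkins.ha-401500-m03' `
        -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem `
        -CAcreateserial -days 365 -extfile san.cnf -out server.pem
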
	I0401 11:29:30.435832   12872 provision.go:177] copyRemoteCerts
	I0401 11:29:30.446823   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:29:30.446823   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:32.770358   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:32.771357   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:32.771465   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:35.537563   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:35.537563   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:35.538822   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:29:35.655030   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.208139s)
	I0401 11:29:35.655030   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 11:29:35.655030   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 11:29:35.718584   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 11:29:35.718584   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 11:29:35.781610   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 11:29:35.783291   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 11:29:35.837915   12872 provision.go:87] duration metric: took 15.7180964s to configureAuth
	I0401 11:29:35.837915   12872 buildroot.go:189] setting minikube options for container-runtime
	I0401 11:29:35.838636   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:29:35.838636   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:38.150526   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:38.150526   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:38.150526   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:40.888275   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:40.889287   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:40.898375   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:40.898375   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:40.898375   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 11:29:41.032689   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 11:29:41.032689   12872 buildroot.go:70] root file system type: tmpfs
	I0401 11:29:41.032909   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 11:29:41.033014   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:43.337930   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:43.337930   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:43.338416   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:46.101816   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:46.101816   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:46.108345   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:46.108345   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:46.108939   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.153.73"
	Environment="NO_PROXY=172.19.153.73,172.19.149.50"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 11:29:46.271025   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.153.73
	Environment=NO_PROXY=172.19.153.73,172.19.149.50
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 11:29:46.271362   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:48.553586   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:48.553586   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:48.553586   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:51.307547   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:51.307862   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:51.313085   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:29:51.313834   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:29:51.314064   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 11:29:53.537223   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
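	
	[Note] The SSH command above is the idempotent unit-replacement pattern minikube uses: the new unit is only swapped in (with a forced daemon-reload, enable, and restart) when it differs from the installed one. Condensed as a standalone sketch:
	
	  # Replace docker.service only when the rendered unit actually changed.
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	  }
	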
	
	I0401 11:29:53.537403   12872 machine.go:97] duration metric: took 49.0328192s to provisionDockerMachine
	I0401 11:29:53.537460   12872 client.go:171] duration metric: took 2m3.0377063s to LocalClient.Create
	I0401 11:29:53.537460   12872 start.go:167] duration metric: took 2m3.0377063s to libmachine.API.Create "ha-401500"
	I0401 11:29:53.537522   12872 start.go:293] postStartSetup for "ha-401500-m03" (driver="hyperv")
	I0401 11:29:53.537584   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:29:53.551431   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:29:53.551431   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:29:55.824297   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:29:55.824297   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:55.824714   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:29:58.563462   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:29:58.563462   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:29:58.563462   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:29:58.675603   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1241359s)
	I0401 11:29:58.688880   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:29:58.699081   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 11:29:58.699187   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 11:29:58.699764   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 11:29:58.700887   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 11:29:58.700887   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 11:29:58.713444   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:29:58.735113   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 11:29:58.783152   12872 start.go:296] duration metric: took 5.2455936s for postStartSetup
	I0401 11:29:58.786301   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:01.073341   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:01.073341   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:01.073341   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:03.841016   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:03.841268   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:03.841776   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\config.json ...
	I0401 11:30:03.846438   12872 start.go:128] duration metric: took 2m13.349416s to createHost
	I0401 11:30:03.846561   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:06.163967   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:06.163967   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:06.163967   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:08.937930   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:08.937930   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:08.943770   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:30:08.945251   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:30:08.945251   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 11:30:09.075366   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711971009.075846302
	
	I0401 11:30:09.075366   12872 fix.go:216] guest clock: 1711971009.075846302
	I0401 11:30:09.075366   12872 fix.go:229] Guest: 2024-04-01 11:30:09.075846302 +0000 UTC Remote: 2024-04-01 11:30:03.8465619 +0000 UTC m=+594.076466301 (delta=5.229284402s)
	I0401 11:30:09.075366   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:11.360821   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:11.360919   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:11.360919   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:14.100935   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:14.100935   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:14.107770   12872 main.go:141] libmachine: Using SSH client type: native
	I0401 11:30:14.108003   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.145.208 22 <nil> <nil>}
	I0401 11:30:14.108003   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711971009
	I0401 11:30:14.246088   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 11:30:09 UTC 2024
	
	I0401 11:30:14.246088   12872 fix.go:236] clock set: Mon Apr  1 11:30:09 UTC 2024
	 (err=<nil>)
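	
	[Note] The clock-fix sequence above reads the guest clock (date +%s.%N), computes the delta against the host (5.2s here), and force-sets the guest with date -s. A minimal sketch of the same flow, assuming direct SSH access to the node (minikube does this over its own SSH client in Go):
	
	  guest=$(ssh docker@172.19.145.208 'date +%s.%N')      # guest clock
	  host=$(date +%s.%N)                                   # host clock
	  echo "delta: $(echo "$host - $guest" | bc)s"
	  ssh docker@172.19.145.208 "sudo date -s @${host%.*}"  # snap guest to host
	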
	I0401 11:30:14.246088   12872 start.go:83] releasing machines lock for "ha-401500-m03", held for 2m23.749688s
	I0401 11:30:14.246088   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:16.517422   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:16.517924   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:16.518081   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:19.304823   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:19.304823   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:19.307598   12872 out.go:177] * Found network options:
	I0401 11:30:19.310427   12872 out.go:177]   - NO_PROXY=172.19.153.73,172.19.149.50
	W0401 11:30:19.312688   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.312688   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:30:19.315258   12872 out.go:177]   - NO_PROXY=172.19.153.73,172.19.149.50
	W0401 11:30:19.317923   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.317972   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.318630   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 11:30:19.319429   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 11:30:19.321879   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:30:19.321879   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:19.334359   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 11:30:19.334359   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500-m03 ).state
	I0401 11:30:21.689774   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:21.689774   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:21.689774   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:21.691847   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:21.691847   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:21.691847   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500-m03 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:24.523877   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:24.523877   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:24.524539   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:30:24.580027   12872 main.go:141] libmachine: [stdout =====>] : 172.19.145.208
	
	I0401 11:30:24.580234   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:24.580234   12872 sshutil.go:53] new ssh client: &{IP:172.19.145.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500-m03\id_rsa Username:docker}
	I0401 11:30:24.614348   12872 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.279952s)
	W0401 11:30:24.614348   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 11:30:24.628484   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:30:24.744975   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
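	
	[Note] The find/mv pair above disables competing CNI configs rather than deleting them: any bridge or podman conflist is renamed with a .mk_disabled suffix so it no longer loads but can be restored. Equivalent standalone form:
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	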
	I0401 11:30:24.744975   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:30:24.744975   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4230581s)
	I0401 11:30:24.745281   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:30:24.798178   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:30:24.830185   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:30:24.850893   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:30:24.863487   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:30:24.897888   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:30:24.930762   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:30:24.964808   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:30:25.002124   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:30:25.035549   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:30:25.071946   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:30:25.110769   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:30:25.147030   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:30:25.180584   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:30:25.211723   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:25.435863   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
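	
	[Note] The sed runs above rewrite /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (the cgroupfs driver), migrate any runc v1 runtime references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The edits that matter for the cgroup driver, condensed:
	
	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	  sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd
	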
	I0401 11:30:25.472379   12872 start.go:494] detecting cgroup driver to use...
	I0401 11:30:25.484501   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 11:30:25.527599   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:30:25.565685   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 11:30:25.612698   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 11:30:25.655318   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:30:25.696324   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:30:25.761459   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:30:25.788917   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:30:25.841605   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0401 11:30:25.861586   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 11:30:25.882565   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 11:30:25.935639   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 11:30:26.176613   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 11:30:26.382512   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 11:30:26.382512   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 11:30:26.429264   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:26.662808   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 11:30:29.258871   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.595968s)
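	
	[Note] The 130-byte /etc/docker/daemon.json payload is transferred from memory and never echoed, so its exact contents are not in this log. Given the "cgroupfs" driver message above, a plausible shape is the standard exec-opts pin — an assumption, not the verbatim file:
	
	  printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"]\n}\n' | sudo tee /etc/docker/daemon.json
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	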
	I0401 11:30:29.270516   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 11:30:29.311357   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:30:29.353931   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 11:30:29.583785   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 11:30:29.798023   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:30.021194   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 11:30:30.065959   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 11:30:30.106615   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:30.329857   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 11:30:30.447915   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 11:30:30.460899   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 11:30:30.471316   12872 start.go:562] Will wait 60s for crictl version
	I0401 11:30:30.484141   12872 ssh_runner.go:195] Run: which crictl
	I0401 11:30:30.504876   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:30:30.582381   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 11:30:30.595632   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:30:30.644570   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 11:30:30.683526   12872 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 11:30:30.686207   12872 out.go:177]   - env NO_PROXY=172.19.153.73
	I0401 11:30:30.689225   12872 out.go:177]   - env NO_PROXY=172.19.153.73,172.19.149.50
	I0401 11:30:30.693259   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 11:30:30.697916   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 11:30:30.698059   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 11:30:30.698059   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 11:30:30.698120   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 11:30:30.701194   12872 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 11:30:30.701286   12872 ip.go:210] interface addr: 172.19.144.1/20
	I0401 11:30:30.715311   12872 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 11:30:30.722311   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
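	
	[Note] The one-liner above is how minikube pins host.minikube.internal: filter any existing entry out of /etc/hosts, append the current host gateway (172.19.144.1), and copy the temp file back so the edit replaces the file in one step. Annotated:
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	    echo $'172.19.144.1\thost.minikube.internal'      # append the fresh one
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	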
	I0401 11:30:30.749517   12872 mustload.go:65] Loading cluster: ha-401500
	I0401 11:30:30.750236   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:30:30.750296   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:30:33.029342   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:33.029342   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:33.029531   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:30:33.030406   12872 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500 for IP: 172.19.145.208
	I0401 11:30:33.030406   12872 certs.go:194] generating shared ca certs ...
	I0401 11:30:33.030406   12872 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:30:33.030782   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 11:30:33.031326   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 11:30:33.031749   12872 certs.go:256] generating profile certs ...
	I0401 11:30:33.032475   12872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\client.key
	I0401 11:30:33.032475   12872 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3
	I0401 11:30:33.032805   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.153.73 172.19.149.50 172.19.145.208 172.19.159.254]
	I0401 11:30:33.276382   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3 ...
	I0401 11:30:33.276382   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3: {Name:mk8c1cd265a28e5c2f46bc1d0572e38b2720cd15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:30:33.277831   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3 ...
	I0401 11:30:33.277831   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3: {Name:mk872163206b05ddb67d4c6d7376093c276d23b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:30:33.278492   12872 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt.5dc6fcf3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt
	I0401 11:30:33.291161   12872 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key.5dc6fcf3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key
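	
	[Note] The apiserver cert is re-minted here because it must list every control-plane IP plus the service IP and the kube-vip VIP (172.19.159.254) as SANs once m03 joins. To confirm the SAN list on a node, assuming openssl is present in the guest image:
	
	  openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	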
	I0401 11:30:33.293324   12872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key
	I0401 11:30:33.293385   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 11:30:33.293718   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0401 11:30:33.294067   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 11:30:33.294218   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 11:30:33.294402   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 11:30:33.294596   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 11:30:33.294799   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 11:30:33.294799   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 11:30:33.295427   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 11:30:33.295718   12872 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 11:30:33.295912   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 11:30:33.296250   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 11:30:33.296573   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 11:30:33.296675   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 11:30:33.297207   12872 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 11:30:33.297265   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /usr/share/ca-certificates/12602.pem
	I0401 11:30:33.297265   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:33.297812   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem -> /usr/share/ca-certificates/1260.pem
	I0401 11:30:33.298034   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:30:35.630288   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:35.630288   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:35.630648   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:38.386890   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:30:38.386890   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:38.388113   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:30:38.488532   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0401 11:30:38.497495   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 11:30:38.534315   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0401 11:30:38.542104   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0401 11:30:38.579361   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 11:30:38.587456   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 11:30:38.629590   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0401 11:30:38.637439   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 11:30:38.677586   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0401 11:30:38.685775   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 11:30:38.726786   12872 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0401 11:30:38.735391   12872 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0401 11:30:38.760875   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:30:38.818003   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:30:38.872299   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:30:38.924420   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 11:30:38.975369   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0401 11:30:39.029400   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 11:30:39.083007   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:30:39.134000   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-401500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 11:30:39.187056   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 11:30:39.242386   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:30:39.292920   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 11:30:39.359184   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 11:30:39.396188   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0401 11:30:39.430262   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 11:30:39.466257   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 11:30:39.503180   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 11:30:39.537744   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0401 11:30:39.573588   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0401 11:30:39.624512   12872 ssh_runner.go:195] Run: openssl version
	I0401 11:30:39.647953   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 11:30:39.684138   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 11:30:39.692258   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 11:30:39.707273   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 11:30:39.730763   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:30:39.766586   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:30:39.804161   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:39.813131   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:39.831406   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:30:39.855372   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 11:30:39.887412   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 11:30:39.928396   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 11:30:39.936549   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 11:30:39.951393   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 11:30:39.979352   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
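	
	[Note] The test/ln/openssl triplets above follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs needs a <subject-hash>.0 symlink (e.g. b5213941.0 for minikubeCA) so certificate verification can locate it. The pattern for one cert:
	
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	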
	I0401 11:30:40.018684   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:30:40.026055   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 11:30:40.026402   12872 kubeadm.go:928] updating node {m03 172.19.145.208 8443 v1.29.3 docker true true} ...
	I0401 11:30:40.026574   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-401500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.145.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 11:30:40.026639   12872 kube-vip.go:111] generating kube-vip config ...
	I0401 11:30:40.040791   12872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 11:30:40.070094   12872 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 11:30:40.070094   12872 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
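	
	[Note] This static pod runs kube-vip on every control plane: leader election over the plndr-cp-lock lease decides which member ARPs the VIP (172.19.159.254), and lb_enable spreads port 8443 across the members. Once a leader is up, the VIP should answer; a quick probe from any machine that can reach it (even an auth error proves the VIP routes):
	
	  curl -k https://172.19.159.254:8443/version
	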
	I0401 11:30:40.084930   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 11:30:40.103559   12872 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 11:30:40.117318   12872 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 11:30:40.137308   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0401 11:30:40.137308   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0401 11:30:40.137308   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0401 11:30:40.137308   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:30:40.137308   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:30:40.153000   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:30:40.154156   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 11:30:40.156169   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 11:30:40.177128   12872 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:30:40.177195   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 11:30:40.177195   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 11:30:40.177195   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0401 11:30:40.177195   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 11:30:40.194244   12872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 11:30:40.286223   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 11:30:40.286312   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
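	
	[Note] Each binary above is fetched from dl.k8s.io with its published .sha256 file as the checksum source. The equivalent manual fetch-and-verify, sketched for kubelet:
	
	  curl -LO https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet
	  echo "$(curl -Ls https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check
	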
	I0401 11:30:41.757209   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 11:30:41.777889   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0401 11:30:41.814235   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:30:41.852143   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 11:30:41.896910   12872 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0401 11:30:41.903965   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:30:41.942157   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:30:42.166468   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:30:42.207153   12872 host.go:66] Checking if "ha-401500" exists ...
	I0401 11:30:42.208101   12872 start.go:316] joinCluster: &{Name:ha-401500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-401500 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.73 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.149.50 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.145.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:30:42.208182   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 11:30:42.208379   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-401500 ).state
	I0401 11:30:44.466967   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 11:30:44.466967   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:44.467734   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-401500 ).networkadapters[0]).ipaddresses[0]
	I0401 11:30:47.245121   12872 main.go:141] libmachine: [stdout =====>] : 172.19.153.73
	
	I0401 11:30:47.245186   12872 main.go:141] libmachine: [stderr =====>] : 
	I0401 11:30:47.245445   12872 sshutil.go:53] new ssh client: &{IP:172.19.153.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-401500\id_rsa Username:docker}
	I0401 11:30:47.450439   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2422201s)
	I0401 11:30:47.450439   12872 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.145.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:30:47.450439   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xpgd5p.3hmotncbc7b1c956 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m03 --control-plane --apiserver-advertise-address=172.19.145.208 --apiserver-bind-port=8443"
	I0401 11:31:44.747700   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xpgd5p.3hmotncbc7b1c956 --discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-401500-m03 --control-plane --apiserver-advertise-address=172.19.145.208 --apiserver-bind-port=8443": (57.2968606s)
	I0401 11:31:44.747700   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 11:31:45.504229   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-401500-m03 minikube.k8s.io/updated_at=2024_04_01T11_31_45_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=ha-401500 minikube.k8s.io/primary=false
	I0401 11:31:45.692841   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-401500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
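	
	[Note] Joining a third control plane reduces to two commands, both visible above: mint a long-lived join token on an existing control plane, then run kubeadm join with --control-plane and the new node's own advertise address (token and hash elided here):
	
	  kubeadm token create --print-join-command --ttl=0
	  kubeadm join control-plane.minikube.internal:8443 --token <token> \
	    --discovery-token-ca-cert-hash sha256:<hash> \
	    --control-plane --apiserver-advertise-address=172.19.145.208 \
	    --apiserver-bind-port=8443 --cri-socket unix:///var/run/cri-dockerd.sock
	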
	I0401 11:31:45.890412   12872 start.go:318] duration metric: took 1m3.6816429s to joinCluster
	I0401 11:31:45.890528   12872 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.19.145.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 11:31:45.891284   12872 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 11:31:45.894837   12872 out.go:177] * Verifying Kubernetes components...
	I0401 11:31:45.911938   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:31:46.283950   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:31:46.322524   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 11:31:46.323226   12872 kapi.go:59] client config for ha-401500: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-401500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 11:31:46.323226   12872 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.153.73:8443
	I0401 11:31:46.326442   12872 node_ready.go:35] waiting up to 6m0s for node "ha-401500-m03" to be "Ready" ...
	I0401 11:31:46.326442   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:46.326442   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.326442   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.326442   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.340746   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:31:46.342206   12872 node_ready.go:49] node "ha-401500-m03" has status "Ready":"True"
	I0401 11:31:46.342285   12872 node_ready.go:38] duration metric: took 15.8434ms for node "ha-401500-m03" to be "Ready" ...
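	
	[Note] The raw round_trippers GETs that follow are minikube polling the API server directly for Ready conditions on the node and the system pods. The kubectl equivalent of the node check, for reference:
	
	  kubectl get node ha-401500-m03 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	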
	I0401 11:31:46.342285   12872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:31:46.342475   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:31:46.342501   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.342501   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.342501   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.361730   12872 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0401 11:31:46.372039   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.372039   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4xvlf
	I0401 11:31:46.372039   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.372039   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.372039   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.383278   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:31:46.385316   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:46.385398   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.385398   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.385398   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.408286   12872 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0401 11:31:46.409401   12872 pod_ready.go:92] pod "coredns-76f75df574-4xvlf" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.409401   12872 pod_ready.go:81] duration metric: took 37.361ms for pod "coredns-76f75df574-4xvlf" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.409401   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.409924   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vjslq
	I0401 11:31:46.409924   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.409924   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.409924   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.414514   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:46.415563   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:46.415563   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.415563   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.415563   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.421494   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:46.422195   12872 pod_ready.go:92] pod "coredns-76f75df574-vjslq" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.422232   12872 pod_ready.go:81] duration metric: took 12.8315ms for pod "coredns-76f75df574-vjslq" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.422283   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.422724   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500
	I0401 11:31:46.422724   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.422724   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.422724   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.431725   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:46.432397   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:46.432397   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.432397   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.432397   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.436927   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:46.437253   12872 pod_ready.go:92] pod "etcd-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.437253   12872 pod_ready.go:81] duration metric: took 14.6151ms for pod "etcd-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.437253   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.437253   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m02
	I0401 11:31:46.437833   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.437877   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.437877   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.441021   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:46.442133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:31:46.442133   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.442133   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.443339   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.447469   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:46.448876   12872 pod_ready.go:92] pod "etcd-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:46.448943   12872 pod_ready.go:81] duration metric: took 11.6891ms for pod "etcd-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.448943   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:46.528708   12872 request.go:629] Waited for 79.4213ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:46.529022   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:46.529022   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.529022   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.529126   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.537096   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:31:46.735050   12872 request.go:629] Waited for 196.9286ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:46.735265   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:46.735458   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.735458   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.735458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.741221   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:46.957627   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:46.957708   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:46.957708   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:46.957708   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:46.962878   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.130326   12872 request.go:629] Waited for 165.9441ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.130426   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.130636   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.130767   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.130767   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.135889   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.457030   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:47.457113   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.457172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.457172   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.462831   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.534173   12872 request.go:629] Waited for 69.8074ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.534364   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.534364   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.534468   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.534468   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.541019   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:47.956206   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:47.956206   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.956206   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.956206   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.961639   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:47.962989   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:47.962989   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:47.962989   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:47.962989   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:47.967334   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:48.457648   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:48.457648   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.457648   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.457648   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.462766   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:48.463427   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:48.463427   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.463427   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.463427   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.468064   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:48.469239   12872 pod_ready.go:102] pod "etcd-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:48.961515   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:48.961515   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.961515   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.961515   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.967037   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:48.968740   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:48.968740   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:48.968740   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:48.968740   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:48.972587   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:49.463543   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:49.463543   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.463543   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.463543   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.469087   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:49.470826   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:49.470908   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.470908   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.470908   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.474762   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:49.962298   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:49.962387   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.962387   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.962387   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.967802   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:49.968847   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:49.973169   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:49.973169   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:49.973169   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:49.978209   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:50.459299   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401500-m03
	I0401 11:31:50.459518   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.459601   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.459601   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.465456   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:50.466541   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:50.466644   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.466644   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.466784   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.472094   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:50.472703   12872 pod_ready.go:92] pod "etcd-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:50.472703   12872 pod_ready.go:81] duration metric: took 4.0237322s for pod "etcd-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
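The paired GET pod / GET node requests above are pod_ready.go's readiness poll: fetch the pod, check its Ready condition, fetch its node, sleep roughly 500ms, repeat until the condition is True or the 6m0s budget expires. A minimal client-go sketch of such a loop follows; the function name, interval, and error handling are illustrative assumptions, not minikube's actual pod_ready.go implementation.

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls one pod until its Ready condition reports True or
	// the timeout expires, issuing the same GET /pods/{name} calls seen in
	// the log above. The 500ms interval mirrors the cadence of the log
	// timestamps and is an assumption, not minikube's configured value.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // a poller could also treat transient errors as retryable
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // no Ready condition recorded yet
			})
	}

A roughly equivalent one-off check from the command line, assuming the kubeconfig context matches the profile name: kubectl --context ha-401500 -n kube-system wait --for=condition=Ready pod/etcd-ha-401500-m03 --timeout=6m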
	I0401 11:31:50.472703   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.472703   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500
	I0401 11:31:50.472703   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.472703   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.472703   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.477502   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:50.478690   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:31:50.478745   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.478745   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.478745   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.482967   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:50.485701   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:50.485733   12872 pod_ready.go:81] duration metric: took 13.0296ms for pod "kube-apiserver-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.485733   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.537774   12872 request.go:629] Waited for 51.9597ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:31:50.538010   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m02
	I0401 11:31:50.538010   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.538010   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.538210   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.556930   12872 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0401 11:31:50.741953   12872 request.go:629] Waited for 183.7751ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:31:50.742262   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:31:50.742262   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.742326   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.742326   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.748011   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:50.748773   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:31:50.748773   12872 pod_ready.go:81] duration metric: took 263.0383ms for pod "kube-apiserver-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
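The interleaved "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter delaying requests once the poll outpaces the client's QPS/Burst budget (client-go defaults are 5 and 10). A minimal sketch of where that knob lives; the values and function name here are illustrative, not what minikube sets.

	package sketch

	import (
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newConfig builds a client config and raises the client-side rate
	// limit that produces the "Waited for ... due to client-side
	// throttling" log lines above. QPS/Burst of 50/100 are example values.
	func newConfig(kubeconfig string) (*rest.Config, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50   // default 5 requests/sec once the burst is spent
		cfg.Burst = 100 // default 10 requests allowed in a burst
		return cfg, nil
	}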
	I0401 11:31:50.748773   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:31:50.931157   12872 request.go:629] Waited for 182.2207ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:50.931430   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:50.931430   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:50.931430   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:50.931536   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:50.938030   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:51.135179   12872 request.go:629] Waited for 195.9234ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.135179   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.135322   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.135322   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.135322   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.143473   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:31:51.340865   12872 request.go:629] Waited for 77.7315ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:51.341102   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:51.341146   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.341172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.341241   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.346671   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:51.528179   12872 request.go:629] Waited for 179.5784ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.528361   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.528361   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.528361   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.528361   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.535141   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:51.764099   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:51.764160   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.764160   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.764160   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.768949   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:51.934067   12872 request.go:629] Waited for 163.2675ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.934498   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:51.934498   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:51.934498   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:51.934610   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:51.940535   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:52.263170   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:52.263170   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.263170   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.263170   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.279706   12872 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0401 11:31:52.340448   12872 request.go:629] Waited for 58.2764ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:52.340577   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:52.340577   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.340577   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.340641   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.349906   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:52.750925   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:52.750925   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.751001   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.751064   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.755449   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:52.756846   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:52.756846   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:52.756846   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:52.756846   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:52.760475   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:52.762286   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:53.260446   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:53.260446   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.260446   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.260446   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.266279   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:53.267730   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:53.267806   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.267806   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.267806   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.272139   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:53.750365   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:53.750431   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.750431   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.750431   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.759792   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:53.761492   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:53.761492   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:53.761492   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:53.761492   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:53.766346   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:54.249071   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:54.249071   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.249071   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.249071   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.254701   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:54.256571   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:54.256671   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.256671   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.256671   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.260790   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:54.753065   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:54.753172   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.753172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.753172   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.758677   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:54.759905   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:54.760448   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:54.760448   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:54.760448   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:54.764677   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:31:54.765588   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:55.255504   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:55.255504   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.255646   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.255646   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.261034   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:55.262924   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:55.262924   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.262924   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.262924   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.270421   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:31:55.754536   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:55.754536   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.754536   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.754536   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.760864   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:55.763042   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:55.763042   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:55.763042   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:55.763042   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:55.767382   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:56.251981   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:56.252268   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.252268   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.252487   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.258426   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:56.259065   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:56.259065   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.259065   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.259065   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.268750   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:31:56.753166   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:56.753270   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.753270   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.753270   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.761595   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:31:56.763518   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:56.763584   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:56.763584   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:56.763584   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:56.768753   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:56.768949   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:57.264177   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:57.264281   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.264281   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.264281   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.270726   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:31:57.271824   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:57.271824   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.271824   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.271824   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.280448   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:31:57.763345   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:57.763345   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.763345   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.763345   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.769110   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:57.770576   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:57.770576   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:57.770576   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:57.770576   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:57.778082   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:31:58.264098   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:58.264283   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.264283   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.264283   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.269964   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:58.271256   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:58.271256   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.271256   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.271256   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.275853   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:58.760704   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:58.760704   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.760704   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.760704   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.765762   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:58.767089   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:58.767160   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:58.767160   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:58.767160   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:58.771741   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:31:58.772863   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:31:59.263254   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:59.263254   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.263254   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.263254   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.268733   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:59.270513   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:59.270513   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.270513   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.270513   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.276390   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:59.759685   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:31:59.759763   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.759763   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.759839   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.765062   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:31:59.766426   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:31:59.766426   12872 round_trippers.go:469] Request Headers:
	I0401 11:31:59.766486   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:31:59.766486   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:31:59.772628   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:00.257940   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:00.257940   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.257940   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.258071   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.262795   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:00.264354   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:00.264410   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.264410   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.264410   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.267723   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:00.756645   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:00.756767   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.756767   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.756767   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.761222   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:00.763633   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:00.763633   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:00.763633   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:00.763633   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:00.768238   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:01.258169   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:01.258169   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.258169   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.258169   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.265909   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:01.267104   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:01.267104   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.267104   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.267104   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.272724   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:01.273297   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:01.760219   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:01.760219   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.760219   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.760219   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.765936   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:01.768305   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:01.768305   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:01.768305   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:01.768376   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:01.777348   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:02.262823   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:02.262823   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.262899   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.262899   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.268250   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:02.270115   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:02.270161   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.270161   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.270161   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.275109   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:02.761714   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:02.761982   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.761982   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.761982   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.766588   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:02.768814   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:02.768814   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:02.768884   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:02.768884   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:02.774191   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:03.262612   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:03.262612   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.262612   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.262724   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.268026   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:03.269335   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:03.269534   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.269534   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.269617   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.274374   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:03.274374   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:03.749805   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:03.749805   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.749884   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.749884   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.755695   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:03.757557   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:03.757557   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:03.757657   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:03.757657   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:03.761831   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:04.252201   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:04.252201   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.252201   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.252201   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.261576   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:32:04.262940   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:04.263038   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.263038   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.263038   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.270471   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:04.754894   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:04.755017   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.755017   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.755017   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.760776   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:04.762073   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:04.762198   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:04.762198   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:04.762198   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:04.766868   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:05.262942   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:05.262942   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.262942   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.262942   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.268406   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:05.269335   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:05.269582   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.269582   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.269582   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.274035   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:05.275500   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:05.752450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:05.752531   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.752531   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.752531   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.757571   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:05.759938   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:05.759938   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:05.759938   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:05.759938   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:05.766699   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:06.256107   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:06.256107   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.256107   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.256107   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.260784   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:06.262908   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:06.262908   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.262908   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.262908   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.266227   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:06.758078   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:06.758078   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.758078   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.758078   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.763605   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:06.764920   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:06.764979   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:06.764979   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:06.764979   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:06.769209   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:07.259553   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:07.259553   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.259641   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.259641   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.265238   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:07.266021   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:07.266021   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.266021   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.266021   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.271438   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:07.756532   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:07.756532   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.756689   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.756689   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.762542   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:07.764434   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:07.764434   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:07.764434   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:07.764434   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:07.769024   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:07.769906   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:08.255038   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:08.255038   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.255038   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.255038   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.261644   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:08.263248   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:08.263316   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.263316   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.263316   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.268580   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:08.756878   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:08.756878   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.756878   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.756878   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.761225   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:08.763125   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:08.763125   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:08.763228   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:08.763228   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:08.767441   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:09.262129   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:09.262129   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.262450   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.262450   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.268290   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:09.269373   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:09.269428   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.269428   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.269428   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.273304   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:09.758004   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:09.758004   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.758004   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.758004   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.767104   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:32:09.768115   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:09.768115   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:09.768115   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:09.768115   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:09.774157   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:09.775457   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:10.260384   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:10.260384   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.260384   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.260384   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.265984   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:10.267109   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:10.267199   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.267199   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.267199   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.271257   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:10.762987   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:10.763091   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.763091   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.763091   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.768681   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:10.770606   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:10.770606   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:10.770606   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:10.770606   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:10.781259   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:11.263963   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:11.263963   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.263963   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.263963   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.268382   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:11.270206   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:11.270206   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.270277   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.270277   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.274996   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:11.763256   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:11.763329   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.763329   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.763329   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.768128   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:11.770104   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:11.770104   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:11.770104   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:11.770104   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:11.775200   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:11.776034   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:12.250756   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:12.250756   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.250969   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.250969   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.257060   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:12.259096   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:12.259096   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.259096   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.259096   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.263248   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:12.751237   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:12.751367   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.751367   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.751367   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.777011   12872 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0401 11:32:12.778242   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:12.778242   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:12.778389   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:12.778389   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:12.787068   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:13.263571   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:13.263647   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.263647   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.263647   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.269018   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:13.270358   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:13.270358   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.270358   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.270358   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.274622   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:13.749647   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:13.749962   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.749962   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.750061   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.758305   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:13.759062   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:13.759062   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:13.759062   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:13.759062   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:13.763799   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:14.264129   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:14.264129   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.264129   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.264129   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.269370   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:14.271032   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:14.271032   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.271032   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.271032   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.274793   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:14.276164   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:14.749852   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:14.750053   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.750053   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.750053   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.755689   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:14.756763   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:14.756856   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:14.756856   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:14.756856   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:14.763833   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:15.253094   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:15.253153   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.253153   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.253153   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.258084   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:15.259100   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:15.259100   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.259100   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.259100   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.270475   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:32:15.754346   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:15.754416   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.754416   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.754416   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.763143   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:15.765147   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:15.765211   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:15.765269   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:15.765269   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:15.771879   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:16.254141   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:16.254141   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.254141   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.254141   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.258802   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:16.258802   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:16.258802   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.258802   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.258802   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.265792   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:16.757434   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:16.757434   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.757434   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.757434   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.763052   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:16.765268   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:16.765268   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:16.765373   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:16.765373   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:16.769663   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:16.770653   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:17.258568   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:17.258568   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.258568   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.258568   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.264206   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:17.265421   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:17.265504   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.265504   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.265504   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.270822   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:17.758997   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:17.759067   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.759067   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.759067   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.764398   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:17.766041   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:17.766041   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:17.766100   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:17.766100   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:17.770380   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:18.257515   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:18.257515   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.257515   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.257684   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.293622   12872 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0401 11:32:18.294614   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:18.294614   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.294614   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.294614   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.301901   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:18.757155   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:18.757155   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.757155   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.757155   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.762470   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:18.764357   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:18.764415   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:18.764415   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:18.764490   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:18.769884   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:18.770990   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:19.261112   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:19.261183   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.261183   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.261183   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.269734   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:19.271126   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:19.271126   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.271126   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.271126   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.275273   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:19.756129   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:19.756129   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.756129   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.756129   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.761493   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:19.762550   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:19.762550   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:19.762550   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:19.762550   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:19.766508   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:20.259141   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:20.259141   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.259141   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.259230   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.264386   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:20.265509   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:20.265509   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.265509   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.265509   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.270709   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:20.755610   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:20.755677   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.755677   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.755677   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.761016   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:20.762758   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:20.762758   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:20.762758   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:20.762758   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:20.768382   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:21.259197   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:21.259271   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.259271   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.259271   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.269351   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:21.270403   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:21.270403   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.270403   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.270403   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.275007   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:21.276003   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:21.759927   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:21.759927   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.759927   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.759927   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.765525   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:21.767410   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:21.767410   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:21.767410   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:21.767410   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:21.772695   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:22.258788   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:22.259038   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.259038   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.259038   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.264233   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:22.266348   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:22.266397   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.266397   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.266397   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.271101   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:22.756788   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:22.756874   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.756874   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.756874   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.761436   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:22.763593   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:22.763593   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:22.763593   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:22.763593   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:22.768402   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:23.253807   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:23.253807   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.253807   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.253807   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.259720   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:23.261185   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:23.261266   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.261266   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.261336   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.269751   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:23.757792   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:23.757792   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.757856   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.757856   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.765851   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:23.766440   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:23.766440   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:23.766440   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:23.766440   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:23.771997   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:23.771997   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:24.263668   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:24.263752   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.263752   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.263752   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.269679   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:24.270680   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:24.270680   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.270680   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.270680   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.282705   12872 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 11:32:24.753929   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:24.753929   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.754027   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.754027   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.757838   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:24.759602   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:24.760192   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:24.760192   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:24.760192   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:24.764564   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:25.260327   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:25.260383   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.260383   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.260383   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.265874   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:25.267133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:25.267133   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.267133   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.267198   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.270936   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:25.762623   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:25.762728   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.762728   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.762728   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.768306   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:25.769789   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:25.769847   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:25.769847   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:25.769847   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:25.774238   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:25.775074   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:26.263943   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:26.263943   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.264177   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.264177   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.269508   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:26.270702   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:26.270780   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.270780   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.270780   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.276109   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:26.750999   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:26.751321   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.751321   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.751321   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.756111   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:26.757242   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:26.757318   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:26.757318   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:26.757318   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:26.761899   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:27.250866   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:27.250866   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.250866   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.250973   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.256293   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:27.258409   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:27.258543   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.258543   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.258543   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.262846   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:27.762962   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:27.762962   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.762962   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.762962   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.768140   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:27.769584   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:27.769584   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:27.769584   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:27.769584   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:27.774750   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:27.775875   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:28.250469   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:28.250587   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.250587   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.250587   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.256680   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:28.258306   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:28.258365   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.258365   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.258365   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.262270   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:28.751857   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:28.751939   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.751939   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.751939   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.758182   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:28.759437   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:28.759563   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:28.759563   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:28.759563   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:28.763801   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:29.259472   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:29.259472   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.259472   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.259472   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.265470   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:29.266396   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:29.266396   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.266473   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.266473   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.270620   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:29.757879   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:29.757958   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.757958   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.757958   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.763990   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:29.765229   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:29.765229   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:29.765229   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:29.765229   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:29.769782   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:30.259850   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:30.259850   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.259850   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.259944   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.265231   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:30.267155   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:30.267155   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.267155   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.267155   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.271792   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:30.272124   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:30.757080   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:30.757195   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.757195   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.757195   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.762137   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:30.763547   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:30.763650   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:30.763650   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:30.763650   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:30.768932   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:31.259726   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:31.259726   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.260015   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.260015   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.270596   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:31.272545   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:31.272661   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.272661   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.272661   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.277925   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:31.758211   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:31.758211   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.758211   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.758211   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.782687   12872 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0401 11:32:31.785025   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:31.785096   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:31.785118   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:31.785118   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:31.798965   12872 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0401 11:32:32.261307   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:32.261307   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.261307   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.261307   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.267923   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:32.269641   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:32.269641   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.269773   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.269773   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.273203   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:32.274664   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:32.749773   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:32.749883   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.749948   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.749948   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.757419   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:32.758923   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:32.758923   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:32.758923   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:32.759031   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:32.766189   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:33.249897   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:33.249964   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.249964   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.249964   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.253686   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:33.256081   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:33.256081   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.256138   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.256138   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.261201   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:33.764528   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:33.764793   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.764793   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.764793   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.770328   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:33.772417   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:33.772417   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:33.772500   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:33.772500   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:33.778953   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:34.249713   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:34.249771   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.249771   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.249771   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.256360   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:34.257122   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:34.257122   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.257122   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.257122   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.262460   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:34.765090   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:34.765090   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.765173   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.765173   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.769495   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:34.770838   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:34.770838   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:34.770838   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:34.771371   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:34.782518   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:32:34.784433   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
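(The cycle above repeats for the remainder of this wait: minikube's readiness helper, pod_ready.go, polls the API server roughly every 500ms, issuing a GET for the pod and then a GET for its node, and logs the pod's Ready status until the condition flips to True. The round_trippers lines come from client-go's debugging round tripper, which at high klog verbosity prints each request's URL, headers, and response latency; the "kubernetes/$Format" in the User-Agent is the unsubstituted git-commit placeholder of a dev build, consistent with the v0.0.0 version string. A minimal sketch of the same polling pattern against a recent client-go follows; the function name waitPodReady and the loop details are illustrative, not minikube's actual pod_ready.go.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls every 500ms, mirroring the cadence in the log above,
// until the pod's Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// The log shows a second GET for the pod's node on every cycle;
			// a pod on a missing node can never become Ready.
			if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q in %q namespace has status Ready=%q\n", name, ns, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet; keep polling
		})
}

func main() {
	// Load the kubeconfig the same way kubectl does (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system",
		"kube-apiserver-ha-401500-m03", 6*time.Minute); err != nil {
		panic(err)
	}
}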
	I0401 11:32:35.252221   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:35.252221   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.252367   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.252367   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.258103   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:35.259416   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:35.259483   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.259483   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.259483   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.264736   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:35.753272   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:35.753351   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.753412   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.753412   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.759853   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:35.761184   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:35.761272   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:35.761272   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:35.761331   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:35.765926   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:36.251681   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:36.251681   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.251681   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.251681   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.258169   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:36.259313   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:36.259313   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.259313   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.259313   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.260723   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 11:32:36.754119   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:36.754119   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.754186   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.754186   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.760085   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:36.761868   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:36.761966   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:36.761966   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:36.761966   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:36.766854   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:37.255294   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:37.255294   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.255554   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.255554   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.260487   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:37.262065   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:37.262141   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.262141   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.262141   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.267224   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:37.268381   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:37.758167   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:37.758167   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.758167   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.758167   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.763749   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:37.766052   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:37.766113   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:37.766113   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:37.766113   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:37.769381   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:38.255056   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:38.255230   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.255230   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.255230   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.260340   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:38.261851   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:38.261851   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.261851   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.261851   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.266838   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:38.757616   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:38.757616   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.757616   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.757616   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.762188   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:38.764508   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:38.764508   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:38.764508   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:38.764508   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:38.769101   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:39.262737   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:39.262858   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.262858   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.262858   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.270257   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:39.271783   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:39.271839   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.271839   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.271839   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.276647   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:39.278517   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:39.762932   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:39.762932   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.762932   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.762932   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.769561   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:39.770649   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:39.770805   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:39.770805   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:39.770805   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:39.776384   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:40.265228   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:40.265228   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.265228   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.265228   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.269608   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:40.271207   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:40.271207   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.271207   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.271207   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.278736   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:40.764518   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:40.764616   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.764616   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.764690   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.770694   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:40.771775   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:40.771775   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:40.771775   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:40.771888   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:40.794130   12872 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0401 11:32:41.253602   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:41.253677   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.253747   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.253747   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.258363   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:41.259347   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:41.259347   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.259347   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.259347   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.265678   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:41.754483   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:41.754483   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.754483   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.754483   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.758482   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:41.760315   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:41.760315   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:41.760315   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:41.760315   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:41.773011   12872 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 11:32:41.773629   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:42.257893   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:42.257893   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.257893   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.257893   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.263800   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:42.264820   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:42.264820   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.264820   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.264820   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.268838   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:42.749895   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:42.749956   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.750013   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.750013   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.753473   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:42.755470   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:42.755470   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:42.755470   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:42.755470   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:42.763474   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:43.252435   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:43.252435   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.252435   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.252435   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.258041   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:43.259407   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:43.259461   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.259461   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.259461   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.263438   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:43.758515   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:43.758515   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.758515   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.758515   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.764505   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:43.768404   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:43.768509   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:43.768509   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:43.768570   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:43.775355   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:43.776698   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:44.250292   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:44.250292   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.250292   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.250292   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.256710   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:44.257745   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:44.257745   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.257745   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.257745   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.262584   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:44.757546   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:44.757546   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.757546   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.757546   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.762224   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:44.763062   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:44.763062   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:44.763062   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:44.763062   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:44.766769   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:45.256728   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:45.257074   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.257074   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.257074   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.262150   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:45.264062   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:45.264062   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.264062   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.264062   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.268521   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:45.756449   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:45.756575   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.756575   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.756575   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.761545   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:45.763458   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:45.763458   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:45.763458   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:45.763458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:45.768425   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:46.263925   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:46.264186   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.264385   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.264416   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.269386   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:46.270387   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:46.270387   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.270387   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.270387   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.277682   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:46.279162   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:46.762450   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:46.762450   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.762450   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.762534   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.767679   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:46.768984   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:46.769042   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:46.769042   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:46.769042   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:46.775266   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:47.263064   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:47.263365   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.263365   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.263365   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.268103   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:47.269761   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:47.269878   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.269878   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.269878   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.274625   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:47.763866   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:47.763866   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.763866   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.763866   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.769524   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:47.771458   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:47.771458   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:47.771458   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:47.771458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:47.777933   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:48.264190   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:48.264434   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.264434   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.264434   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.269960   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:48.271100   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:48.271100   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.271100   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.271100   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.274739   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:48.749585   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:48.749655   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.749655   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.749655   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.756025   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:48.757957   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:48.757957   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:48.757957   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:48.757957   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:48.761607   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:48.762785   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:49.251340   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:49.251417   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.251417   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.251417   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.256792   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:49.259290   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:49.259290   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.259290   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.259290   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.265864   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:49.750232   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:49.750382   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.750382   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.750382   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.755751   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:49.757413   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:49.757469   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:49.757469   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:49.757469   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:49.762039   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:50.251443   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:50.251443   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.251443   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.251443   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.256534   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:50.258209   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:50.258209   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.258209   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.258209   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.262983   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:50.750520   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:50.750520   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.750520   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.750520   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.759164   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:32:50.761133   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:50.761257   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:50.761448   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:50.761539   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:50.765484   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:50.766070   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:51.254043   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:51.254103   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.254161   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.254161   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.260915   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:51.262362   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:51.262431   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.262431   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.262431   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.268045   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:51.755238   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:51.755238   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.755238   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.755238   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.762096   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:51.763880   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:51.764083   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:51.764083   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:51.764083   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:51.769102   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:52.258952   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:52.259143   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.259143   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.259143   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.265564   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:52.266393   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:52.266937   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.266937   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.266937   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.271771   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:52.758508   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:52.758508   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.758508   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.758508   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.764252   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:52.765648   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:52.765704   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:52.765704   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:52.765704   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:52.769972   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:52.771077   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:53.249644   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:53.249866   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.249866   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.249866   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.255286   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:53.257162   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:53.257162   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.257162   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.257162   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.261405   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:53.763103   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:53.763103   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.763103   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.763103   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.768808   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:53.769860   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:53.769860   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:53.769860   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:53.769860   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:53.774574   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:54.249610   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:54.249610   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.249610   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.249610   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.255213   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:54.257695   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:54.257761   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.257761   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.257761   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.261880   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:54.763740   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:54.764021   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.764021   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.764021   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.768875   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:54.770392   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:54.770963   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:54.770963   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:54.770963   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:54.781668   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:32:54.785438   12872 pod_ready.go:102] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:32:55.263110   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:55.263409   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.263409   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.263409   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.277984   12872 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0401 11:32:55.279624   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:55.279624   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.279624   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.279624   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.284290   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:55.769490   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:55.769557   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.769557   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.769557   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.770139   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0401 11:32:55.775716   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:55.775716   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:55.775716   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:55.775716   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:55.778419   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 11:32:56.265398   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:56.265398   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.265398   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.265398   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.270654   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:56.272325   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:56.272384   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.272384   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.272384   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.281537   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:32:56.756443   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:56.756443   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.756630   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.756630   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.760917   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:56.763263   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:56.763324   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:56.763324   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:56.763324   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:56.767029   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:57.262705   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401500-m03
	I0401 11:32:57.262705   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.262705   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.262705   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.267441   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.269453   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:57.269453   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.269453   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.269453   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.274474   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:57.275456   12872 pod_ready.go:92] pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:32:57.275456   12872 pod_ready.go:81] duration metric: took 1m6.5262173s for pod "kube-apiserver-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
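(With the transition above, the shape of the whole wait is visible: pods are checked one at a time, each with its own 6m budget, and the duration metric is wall-clock time from the start of that pod's wait: 1m6.5s for kube-apiserver-ha-401500-m03, versus about 13ms for the pods below that answer Ready on the first poll. The same condition can be checked by hand with kubectl wait --for=condition=Ready -n kube-system pod/kube-apiserver-ha-401500-m03 --timeout=6m. A hypothetical driver loop over the pod order this log follows, reusing the waitPodReady sketch above; the function waitControlPlane is invented for illustration and is not minikube's code.)

func waitControlPlane(ctx context.Context, cs kubernetes.Interface) error {
	// Pod order as it appears in this log; each wait gets a fresh 6m budget.
	pods := []string{
		"kube-apiserver-ha-401500-m03",
		"kube-controller-manager-ha-401500",
		"kube-controller-manager-ha-401500-m02",
		"kube-controller-manager-ha-401500-m03",
	}
	for _, p := range pods {
		start := time.Now()
		if err := waitPodReady(ctx, cs, "kube-system", p, 6*time.Minute); err != nil {
			return err
		}
		fmt.Printf("duration metric: took %s for pod %q to be Ready\n", time.Since(start), p)
	}
	return nil
}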
	I0401 11:32:57.275456   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.275594   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500
	I0401 11:32:57.275655   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.275655   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.275655   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.282385   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:57.283695   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:32:57.283770   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.283770   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.283770   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.288264   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.288657   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:32:57.289183   12872 pod_ready.go:81] duration metric: took 13.7276ms for pod "kube-controller-manager-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.289183   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.289430   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m02
	I0401 11:32:57.289430   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.289430   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.289430   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.294032   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.295596   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:32:57.295596   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.295596   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.295596   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.300969   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.301879   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:32:57.301938   12872 pod_ready.go:81] duration metric: took 12.6953ms for pod "kube-controller-manager-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.301938   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:32:57.302069   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:57.302111   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.302111   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.302111   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.305976   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:57.306919   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:57.306919   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.306919   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.306919   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.311071   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:57.812612   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:57.812647   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.812691   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.812691   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.818844   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:57.820008   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:57.820008   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:57.820539   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:57.820539   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:57.824148   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:58.312932   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:58.312978   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.312978   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.313023   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.318305   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:32:58.319502   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:58.319556   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.319556   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.319556   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.324368   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:32:58.812370   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:58.812483   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.812483   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.812483   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.818911   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:58.820518   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:58.820574   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:58.820634   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:58.820634   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:58.827748   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:59.314797   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:59.314797   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.314797   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.314893   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.322676   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:32:59.324278   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:59.324278   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.324340   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.324340   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.328308   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:32:59.329718   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
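
The alternating GET pairs above are minikube's pod_ready poll: roughly every 500 ms it fetches the pod, then the pod's node, and logs "Ready":"False" until the pod's Ready condition turns true. Below is a minimal client-go sketch of the same check; the kubeconfig path is a placeholder, the 500 ms cadence is inferred from the timestamps, and minikube's real helper additionally inspects the node object, which is omitted here for brevity.

```go
// readiness_sketch.go - a minimal sketch of the poll loop seen in the log,
// assuming client-go; the kubeconfig path is an illustrative placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-ha-401500-m03", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
}
```
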
	I0401 11:32:59.815381   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:32:59.815530   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.815530   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.815530   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.821681   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:32:59.822886   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:32:59.822886   12872 round_trippers.go:469] Request Headers:
	I0401 11:32:59.822886   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:32:59.822886   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:32:59.827532   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:00.317123   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:00.317123   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.317123   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.317123   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.323325   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:00.325366   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:00.325488   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.325488   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.325520   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.329137   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:00.802753   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:00.802753   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.802860   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.802860   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.809084   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:00.810429   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:00.810429   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:00.810512   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:00.810512   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:00.813726   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:01.315138   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:01.315475   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.315475   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.315475   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.321812   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:01.323421   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:01.323421   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.323421   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.323421   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.328092   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:01.802654   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:01.802654   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.802654   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.802654   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.809300   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:01.810999   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:01.810999   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:01.811088   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:01.811088   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:01.815515   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:01.816621   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:02.306438   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:02.306553   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.306553   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.306553   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.311885   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:02.312896   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:02.312896   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.312896   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.312896   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.317751   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:02.806329   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:02.806481   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.806540   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.806540   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.810946   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:02.812949   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:02.812949   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:02.812949   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:02.812949   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:02.817267   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:03.307461   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:03.307461   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.307461   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.307699   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.312430   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:03.314380   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:03.314380   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.314380   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.314380   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.318977   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:03.805172   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:03.805172   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.805172   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.805172   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.810605   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:03.812532   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:03.812592   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:03.812592   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:03.812592   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:03.816481   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:03.818011   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:04.309723   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:04.309723   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.309723   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.309723   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.315548   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:04.316952   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:04.316952   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.316952   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.316952   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.323146   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:04.810654   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:04.810654   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.810654   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.810654   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.816227   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:04.817556   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:04.817556   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:04.817624   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:04.817624   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:04.821474   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:05.308035   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:05.308072   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.308072   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.308072   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.319868   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0401 11:33:05.321008   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:05.321008   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.321008   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.321008   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.325720   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:05.806864   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:05.806864   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.806864   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.806864   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.819639   12872 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 11:33:05.820852   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:05.820911   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:05.820911   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:05.820968   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:05.824148   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:05.825503   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:06.307490   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:06.307728   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.307728   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.307728   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.315563   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 11:33:06.316676   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:06.316676   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.316676   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.316676   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.321281   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:06.806080   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:06.806080   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.806080   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.806080   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.811694   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:06.813178   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:06.813178   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:06.813178   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:06.813178   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:06.819765   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:07.308436   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:07.308561   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.308561   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.308561   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.313986   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:07.316309   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:07.316309   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.316309   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.316309   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.398150   12872 round_trippers.go:574] Response Status: 200 OK in 81 milliseconds
	I0401 11:33:07.813041   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:07.813041   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.813041   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.813041   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.818591   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:07.819857   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:07.819857   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:07.819857   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:07.819857   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:07.824805   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:08.315818   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:08.315818   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.315818   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.315818   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.322023   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:08.323094   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:08.323094   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.323094   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.323094   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.326886   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:08.328039   12872 pod_ready.go:102] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 11:33:08.805316   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:08.805316   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.805316   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.805316   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.814304   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:33:08.816116   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:08.816116   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:08.816116   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:08.816116   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:08.821110   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:09.306658   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:09.306745   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.306745   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.306745   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.312014   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:09.313210   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:09.313210   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.313210   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.313210   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.317798   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:09.805561   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:09.805561   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.805561   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.805561   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.812319   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:09.813509   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:09.813509   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:09.813509   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:09.813509   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:09.817099   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:10.305608   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:10.305608   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.305691   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.305691   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.310882   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.312576   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:10.312695   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.312695   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.312695   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.318132   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.811690   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401500-m03
	I0401 11:33:10.811690   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.811690   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.811690   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.817361   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.818794   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:10.818794   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.818794   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.818794   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.823401   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.823893   12872 pod_ready.go:92] pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.823893   12872 pod_ready.go:81] duration metric: took 13.5218606s for pod "kube-controller-manager-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.823893   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.823893   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zds
	I0401 11:33:10.823893   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.823893   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.823893   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.829079   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:10.831010   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:33:10.831126   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.831126   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.831126   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.840574   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 11:33:10.840574   12872 pod_ready.go:92] pod "kube-proxy-28zds" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.840574   12872 pod_ready.go:81] duration metric: took 16.6807ms for pod "kube-proxy-28zds" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.840574   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccgpw" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.841397   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ccgpw
	I0401 11:33:10.841514   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.841514   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.841514   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.845532   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.847068   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:10.847236   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.847236   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.847353   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.851950   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.853103   12872 pod_ready.go:92] pod "kube-proxy-ccgpw" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.853181   12872 pod_ready.go:81] duration metric: took 12.5283ms for pod "kube-proxy-ccgpw" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.853181   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.853257   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqcpv
	I0401 11:33:10.853257   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.853257   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.853257   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.859719   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:10.861086   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:33:10.861139   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.861139   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.861139   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.864432   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:10.866119   12872 pod_ready.go:92] pod "kube-proxy-hqcpv" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.866162   12872 pod_ready.go:81] duration metric: took 12.9811ms for pod "kube-proxy-hqcpv" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.866205   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.866255   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500
	I0401 11:33:10.866323   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.866323   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.866323   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.870216   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 11:33:10.871442   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500
	I0401 11:33:10.871442   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:10.871442   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:10.871442   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:10.876032   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 11:33:10.876808   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:10.876897   12872 pod_ready.go:81] duration metric: took 10.6923ms for pod "kube-scheduler-ha-401500" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:10.876897   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:11.017444   12872 request.go:629] Waited for 140.4767ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:33:11.017751   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m02
	I0401 11:33:11.017751   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.017751   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.017751   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.026510   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 11:33:11.221802   12872 request.go:629] Waited for 194.3132ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:33:11.222073   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m02
	I0401 11:33:11.222073   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.222073   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.222073   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.227658   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:11.228695   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:11.228750   12872 pod_ready.go:81] duration metric: took 351.8502ms for pod "kube-scheduler-ha-401500-m02" in "kube-system" namespace to be "Ready" ...
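
The "request.go:629] Waited for ... due to client-side throttling, not priority and fairness" entries interleaved here come from client-go's client-side rate limiter, not from the API server's Priority and Fairness machinery: once the client's burst allowance is spent, each further request sleeps until the token bucket refills. The limiter is configured on rest.Config; the sketch below uses client-go's conventional defaults (QPS=5, Burst=10) as an illustration — the actual values used in this run are not visible in the log.

```go
// throttle_sketch.go - where client-go's client-side throttling is configured;
// a sketch assuming client-go, with illustrative QPS/Burst values.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	// With QPS=5 and Burst=10 (common client-go defaults), a tight loop of
	// GETs exhausts the burst; later requests then sleep and log
	// "Waited for ... due to client-side throttling", as seen above.
	cfg.QPS = 5
	cfg.Burst = 10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```
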
	I0401 11:33:11.228808   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:11.424540   12872 request.go:629] Waited for 195.6764ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m03
	I0401 11:33:11.424746   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401500-m03
	I0401 11:33:11.424746   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.424746   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.424746   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.430964   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 11:33:11.615184   12872 request.go:629] Waited for 182.6737ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:11.615550   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes/ha-401500-m03
	I0401 11:33:11.615550   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:11.615550   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:11.615550   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:11.621131   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:11.623152   12872 pod_ready.go:92] pod "kube-scheduler-ha-401500-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 11:33:11.623152   12872 pod_ready.go:81] duration metric: took 394.3411ms for pod "kube-scheduler-ha-401500-m03" in "kube-system" namespace to be "Ready" ...
	I0401 11:33:11.623252   12872 pod_ready.go:38] duration metric: took 1m25.2803693s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:33:11.623252   12872 api_server.go:52] waiting for apiserver process to appear ...
	I0401 11:33:11.635956   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0401 11:33:11.662956   12872 logs.go:276] 2 containers: [62e15884a85d b7c4892b0a5d]
	I0401 11:33:11.671938   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0401 11:33:11.700969   12872 logs.go:276] 1 containers: [b628ead59f77]
	I0401 11:33:11.709938   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0401 11:33:11.737948   12872 logs.go:276] 0 containers: []
	W0401 11:33:11.737948   12872 logs.go:278] No container was found matching "coredns"
	I0401 11:33:11.746937   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0401 11:33:11.775811   12872 logs.go:276] 1 containers: [d08e16bb0ded]
	I0401 11:33:11.788737   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0401 11:33:11.832987   12872 logs.go:276] 1 containers: [cd0b52822b82]
	I0401 11:33:11.843484   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0401 11:33:11.873827   12872 logs.go:276] 2 containers: [fc98c6d4ad09 3291e9558a9b]
	I0401 11:33:11.885750   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0401 11:33:11.911292   12872 logs.go:276] 1 containers: [db14ad1d26da]
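
Before collecting diagnostics, minikube enumerates each control-plane component's containers over SSH with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}"; the "k8s_" name prefix is the convention dockershim/cri-dockerd applies to Kubernetes-managed containers. A local sketch of the same enumeration, using os/exec in place of minikube's ssh_runner (assumes a docker CLI on PATH):

```go
// container_ids_sketch.go - enumerating component containers the way the log
// does; a sketch run locally rather than over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name carries the
// k8s_<component> prefix, mirroring the filter used in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```
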
	I0401 11:33:11.912375   12872 logs.go:123] Gathering logs for kube-apiserver [b7c4892b0a5d] ...
	I0401 11:33:11.912375   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7c4892b0a5d"
	I0401 11:33:12.000710   12872 logs.go:123] Gathering logs for etcd [b628ead59f77] ...
	I0401 11:33:12.000710   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b628ead59f77"
	I0401 11:33:12.064608   12872 logs.go:123] Gathering logs for kube-scheduler [d08e16bb0ded] ...
	I0401 11:33:12.064608   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08e16bb0ded"
	I0401 11:33:12.118358   12872 logs.go:123] Gathering logs for kube-controller-manager [fc98c6d4ad09] ...
	I0401 11:33:12.118358   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc98c6d4ad09"
	I0401 11:33:12.169313   12872 logs.go:123] Gathering logs for kube-controller-manager [3291e9558a9b] ...
	I0401 11:33:12.169313   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3291e9558a9b"
	I0401 11:33:12.204067   12872 logs.go:123] Gathering logs for dmesg ...
	I0401 11:33:12.204067   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:33:12.257145   12872 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:33:12.257299   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:33:12.904903   12872 logs.go:123] Gathering logs for kube-apiserver [62e15884a85d] ...
	I0401 11:33:12.904903   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e15884a85d"
	I0401 11:33:12.951094   12872 logs.go:123] Gathering logs for kindnet [db14ad1d26da] ...
	I0401 11:33:12.951094   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db14ad1d26da"
	I0401 11:33:12.991685   12872 logs.go:123] Gathering logs for Docker ...
	I0401 11:33:12.991814   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0401 11:33:13.069278   12872 logs.go:123] Gathering logs for kubelet ...
	I0401 11:33:13.069278   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:33:13.146727   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.146727   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.147725   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:13.147871   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:13.168730   12872 logs.go:123] Gathering logs for kube-proxy [cd0b52822b82] ...
	I0401 11:33:13.168730   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd0b52822b82"
	I0401 11:33:13.210442   12872 logs.go:123] Gathering logs for container status ...
	I0401 11:33:13.210514   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:33:13.322647   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:13.322789   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:33:13.322918   12872 out.go:239] X Problems detected in kubelet:
	W0401 11:33:13.323003   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.323003   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:13.323047   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:13.323047   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:13.323047   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:13.323047   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
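
The four kubelet problems flagged above are one RBAC denial reported twice per resource: at 11:31:30, while ha-401500-m03 was joining the cluster, the kubelet's informers tried to list Services and Nodes before its client credentials were usable, so the API server authenticated the requests as system:anonymous, which is not allowed to list anything at cluster scope. In a run like this it is most likely a transient bootstrap window rather than a persistent misconfiguration. The same 403 can be reproduced with a deliberately unauthenticated client; a sketch follows, with the server address taken from this run and TLS verification skipped only for brevity.

```go
// anonymous_403_sketch.go - reproducing the `system:anonymous cannot list`
// denial with an unauthenticated client-go client; a sketch, not minikube code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host:            "https://172.19.153.73:8443",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // sketch only
		// No bearer token or client certificate: the API server sees the
		// request as user "system:anonymous".
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{})
	// Expected: services is forbidden: User "system:anonymous" cannot list
	// resource "services" in API group "" at the cluster scope
	fmt.Println(err)
}
```
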
	I0401 11:33:23.344682   12872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 11:33:23.375483   12872 api_server.go:72] duration metric: took 1m37.4842717s to wait for apiserver process to appear ...
	I0401 11:33:23.375483   12872 api_server.go:88] waiting for apiserver healthz status ...
	I0401 11:33:23.388456   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0401 11:33:23.418718   12872 logs.go:276] 2 containers: [62e15884a85d b7c4892b0a5d]
	I0401 11:33:23.428984   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0401 11:33:23.452693   12872 logs.go:276] 1 containers: [b628ead59f77]
	I0401 11:33:23.463666   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0401 11:33:23.488468   12872 logs.go:276] 0 containers: []
	W0401 11:33:23.488468   12872 logs.go:278] No container was found matching "coredns"
	I0401 11:33:23.499029   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0401 11:33:23.526218   12872 logs.go:276] 1 containers: [d08e16bb0ded]
	I0401 11:33:23.536258   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0401 11:33:23.561780   12872 logs.go:276] 1 containers: [cd0b52822b82]
	I0401 11:33:23.571387   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0401 11:33:23.594293   12872 logs.go:276] 2 containers: [fc98c6d4ad09 3291e9558a9b]
	I0401 11:33:23.603990   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0401 11:33:23.636657   12872 logs.go:276] 1 containers: [db14ad1d26da]
	I0401 11:33:23.636747   12872 logs.go:123] Gathering logs for kube-proxy [cd0b52822b82] ...
	I0401 11:33:23.636747   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd0b52822b82"
	I0401 11:33:23.665753   12872 logs.go:123] Gathering logs for kube-controller-manager [fc98c6d4ad09] ...
	I0401 11:33:23.665753   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc98c6d4ad09"
	I0401 11:33:23.718225   12872 logs.go:123] Gathering logs for container status ...
	I0401 11:33:23.718225   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:33:23.843825   12872 logs.go:123] Gathering logs for kubelet ...
	I0401 11:33:23.843914   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:33:23.916302   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:23.916302   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:23.917017   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:23.917397   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:23.938137   12872 logs.go:123] Gathering logs for dmesg ...
	I0401 11:33:23.938137   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:33:23.965704   12872 logs.go:123] Gathering logs for kube-scheduler [d08e16bb0ded] ...
	I0401 11:33:23.965704   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08e16bb0ded"
	I0401 11:33:24.038052   12872 logs.go:123] Gathering logs for etcd [b628ead59f77] ...
	I0401 11:33:24.038052   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b628ead59f77"
	I0401 11:33:24.097077   12872 logs.go:123] Gathering logs for kube-controller-manager [3291e9558a9b] ...
	I0401 11:33:24.097077   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3291e9558a9b"
	I0401 11:33:24.130240   12872 logs.go:123] Gathering logs for kindnet [db14ad1d26da] ...
	I0401 11:33:24.130346   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db14ad1d26da"
	I0401 11:33:24.164959   12872 logs.go:123] Gathering logs for Docker ...
	I0401 11:33:24.165038   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0401 11:33:24.242554   12872 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:33:24.242554   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:33:24.525210   12872 logs.go:123] Gathering logs for kube-apiserver [62e15884a85d] ...
	I0401 11:33:24.525210   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e15884a85d"
	I0401 11:33:24.568905   12872 logs.go:123] Gathering logs for kube-apiserver [b7c4892b0a5d] ...
	I0401 11:33:24.568905   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7c4892b0a5d"
	I0401 11:33:24.661393   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:24.661393   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:33:24.661393   12872 out.go:239] X Problems detected in kubelet:
	W0401 11:33:24.661393   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:24.661393   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:24.661710   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:24.661710   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:24.661710   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:24.661793   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:33:34.686851   12872 api_server.go:253] Checking apiserver healthz at https://172.19.153.73:8443/healthz ...
	I0401 11:33:34.694523   12872 api_server.go:279] https://172.19.153.73:8443/healthz returned 200:
	ok
	I0401 11:33:34.694523   12872 round_trippers.go:463] GET https://172.19.153.73:8443/version
	I0401 11:33:34.694705   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:34.694740   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:34.694740   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:34.695859   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 11:33:34.695859   12872 api_server.go:141] control plane version: v1.29.3
	I0401 11:33:34.695859   12872 api_server.go:131] duration metric: took 11.3202957s to wait for apiserver health ...
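
Once the kube-apiserver process is up, minikube probes https://172.19.153.73:8443/healthz until it answers 200 with body "ok", then reads /version for the control-plane version (v1.29.3 here); both endpoints are readable without credentials under the default RBAC public-info bindings. A standalone sketch of that probe — unlike the real client, it skips TLS verification instead of trusting the cluster CA:

```go
// healthz_sketch.go - probing apiserver /healthz and /version as in the log;
// a sketch with TLS verification disabled (the real client uses the cluster CA).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://172.19.153.73:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}
```
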
	I0401 11:33:34.695859   12872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 11:33:34.706995   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0401 11:33:34.737571   12872 logs.go:276] 2 containers: [62e15884a85d b7c4892b0a5d]
	I0401 11:33:34.748310   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0401 11:33:34.778031   12872 logs.go:276] 1 containers: [b628ead59f77]
	I0401 11:33:34.786732   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0401 11:33:34.819067   12872 logs.go:276] 0 containers: []
	W0401 11:33:34.819164   12872 logs.go:278] No container was found matching "coredns"
	I0401 11:33:34.829696   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0401 11:33:34.854494   12872 logs.go:276] 1 containers: [d08e16bb0ded]
	I0401 11:33:34.862141   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0401 11:33:34.898930   12872 logs.go:276] 1 containers: [cd0b52822b82]
	I0401 11:33:34.911428   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0401 11:33:34.937977   12872 logs.go:276] 2 containers: [fc98c6d4ad09 3291e9558a9b]
	I0401 11:33:34.950507   12872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0401 11:33:34.978431   12872 logs.go:276] 1 containers: [db14ad1d26da]
	I0401 11:33:34.978431   12872 logs.go:123] Gathering logs for kindnet [db14ad1d26da] ...
	I0401 11:33:34.978431   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db14ad1d26da"
	I0401 11:33:35.014419   12872 logs.go:123] Gathering logs for Docker ...
	I0401 11:33:35.014419   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0401 11:33:35.093393   12872 logs.go:123] Gathering logs for container status ...
	I0401 11:33:35.093393   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:33:35.209273   12872 logs.go:123] Gathering logs for dmesg ...
	I0401 11:33:35.209273   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:33:35.240517   12872 logs.go:123] Gathering logs for kube-apiserver [b7c4892b0a5d] ...
	I0401 11:33:35.240622   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7c4892b0a5d"
	I0401 11:33:35.331847   12872 logs.go:123] Gathering logs for etcd [b628ead59f77] ...
	I0401 11:33:35.331847   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b628ead59f77"
	I0401 11:33:35.383048   12872 logs.go:123] Gathering logs for kube-controller-manager [fc98c6d4ad09] ...
	I0401 11:33:35.383048   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc98c6d4ad09"
	I0401 11:33:35.454641   12872 logs.go:123] Gathering logs for kube-proxy [cd0b52822b82] ...
	I0401 11:33:35.454641   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd0b52822b82"
	I0401 11:33:35.488141   12872 logs.go:123] Gathering logs for kube-controller-manager [3291e9558a9b] ...
	I0401 11:33:35.488202   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3291e9558a9b"
	I0401 11:33:35.521732   12872 logs.go:123] Gathering logs for kubelet ...
	I0401 11:33:35.521732   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:33:35.592733   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:35.593137   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:35.593524   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:35.593750   12872 logs.go:138] Found kubelet problem: Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:35.616066   12872 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:33:35.616066   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:33:35.938008   12872 logs.go:123] Gathering logs for kube-apiserver [62e15884a85d] ...
	I0401 11:33:35.938008   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e15884a85d"
	I0401 11:33:35.983969   12872 logs.go:123] Gathering logs for kube-scheduler [d08e16bb0ded] ...
	I0401 11:33:35.984049   12872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08e16bb0ded"
	I0401 11:33:36.039526   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:36.039526   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:33:36.039526   12872 out.go:239] X Problems detected in kubelet:
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790138    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790197    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: W0401 11:31:30.790423    2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 11:33:36.039526   12872 out.go:239]   Apr 01 11:31:30 ha-401500-m03 kubelet[2095]: E0401 11:31:30.790486    2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-401500-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	I0401 11:33:36.039526   12872 out.go:304] Setting ErrFile to fd 576...
	I0401 11:33:36.040571   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
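	The "system:anonymous" denials flagged above are ordinary RBAC rejections: the requests reached the API server without valid client credentials, so they were evaluated as the unauthenticated system:anonymous user. On a freshly joining control-plane node such as ha-401500-m03 this is most plausibly a short-lived window during kubelet TLS bootstrap, before the node's client certificate has been issued. As an illustrative check (an assumption for illustration, not part of this test run), RBAC impersonation confirms the denial:
	
	    kubectl auth can-i list services --as=system:anonymous --namespace=kube-system
	    # prints "no" unless anonymous access has been explicitly granted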
	I0401 11:33:46.059767   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:33:46.059767   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.059767   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.059767   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.076244   12872 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0401 11:33:46.091991   12872 system_pods.go:59] 24 kube-system pods found
	I0401 11:33:46.091991   12872 system_pods.go:61] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:33:46.091991   12872 system_pods.go:61] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:33:46.092375   12872 system_pods.go:61] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:33:46.092403   12872 system_pods.go:61] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:33:46.092403   12872 system_pods.go:61] "etcd-ha-401500-m03" [12ed1798-15e1-45fb-bc01-cb7d8cb56be1] Running
	I0401 11:33:46.092403   12872 system_pods.go:61] "kindnet-8f8ts" [bd227165-7098-4498-8ba6-6f903edfef84] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-apiserver-ha-401500-m03" [4e3c989c-2728-4eea-85f0-e98d51496a8e] Running
	I0401 11:33:46.092463   12872 system_pods.go:61] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-controller-manager-ha-401500-m03" [f16272c3-226f-4480-997f-4e3269042d2d] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-proxy-ccgpw" [e8debcf2-d756-4fc4-9931-102b1eef4ee5] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:33:46.092523   12872 system_pods.go:61] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-scheduler-ha-401500-m03" [bbe0b265-6b78-4984-9190-014904684180] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "kube-vip-ha-401500-m03" [55b3a39a-a4d7-435b-ba14-3139eef4fef8] Running
	I0401 11:33:46.092600   12872 system_pods.go:61] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:33:46.092673   12872 system_pods.go:74] duration metric: took 11.3966599s to wait for pod list to return data ...
	I0401 11:33:46.092673   12872 default_sa.go:34] waiting for default service account to be created ...
	I0401 11:33:46.092840   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/default/serviceaccounts
	I0401 11:33:46.092840   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.092840   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.092966   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.098848   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:46.099795   12872 default_sa.go:45] found service account: "default"
	I0401 11:33:46.099795   12872 default_sa.go:55] duration metric: took 7.1219ms for default service account to be created ...
	I0401 11:33:46.099908   12872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 11:33:46.099908   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/namespaces/kube-system/pods
	I0401 11:33:46.100028   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.100028   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.100028   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.110230   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 11:33:46.121996   12872 system_pods.go:86] 24 kube-system pods found
	I0401 11:33:46.121996   12872 system_pods.go:89] "coredns-76f75df574-4xvlf" [d2a6344b-f0f6-49a1-9135-2a2ae21228b9] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "coredns-76f75df574-vjslq" [81ef7e9b-acf1-411f-8f00-bb9fea08056f] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "etcd-ha-401500" [532eef29-0a6a-4b38-82a7-522c28eb8d64] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "etcd-ha-401500-m02" [258b489e-95c8-4bfc-931f-2392bd619257] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "etcd-ha-401500-m03" [12ed1798-15e1-45fb-bc01-cb7d8cb56be1] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kindnet-8f8ts" [bd227165-7098-4498-8ba6-6f903edfef84] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kindnet-92s2r" [5d6301b7-cb61-401f-9b6d-1a77775b65ac] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kindnet-v22wx" [86d50e2c-cb46-475b-9ec9-e16549903f65] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-apiserver-ha-401500" [bd79feb9-6db9-49ab-87ec-debf9556277f] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-apiserver-ha-401500-m02" [c092dcfe-f711-419d-b172-05670e1c4b53] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-apiserver-ha-401500-m03" [4e3c989c-2728-4eea-85f0-e98d51496a8e] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-controller-manager-ha-401500" [aa7dc05b-ee68-49fa-9a08-60e079f62848] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-controller-manager-ha-401500-m02" [2755a2be-c5d2-4df7-9572-f2bde8aa9314] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-controller-manager-ha-401500-m03" [f16272c3-226f-4480-997f-4e3269042d2d] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-proxy-28zds" [bb38f484-6c10-4874-a3a7-dba22c1720a0] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-proxy-ccgpw" [e8debcf2-d756-4fc4-9931-102b1eef4ee5] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-proxy-hqcpv" [edf6bd75-05e1-479f-b190-13d867bb7ef5] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-scheduler-ha-401500" [d727c9ec-579a-4449-90b1-86b790573abb] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-scheduler-ha-401500-m02" [b38ecb47-0b33-4432-a060-67e352fc9d73] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-scheduler-ha-401500-m03" [bbe0b265-6b78-4984-9190-014904684180] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-vip-ha-401500" [b1386d4f-d6ab-4cfd-91e4-39539d0e2854] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-vip-ha-401500-m02" [d5cc5b36-52ad-4da8-b75a-8cfce3b3391f] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "kube-vip-ha-401500-m03" [55b3a39a-a4d7-435b-ba14-3139eef4fef8] Running
	I0401 11:33:46.121996   12872 system_pods.go:89] "storage-provisioner" [373b3186-34e3-4ae2-8ddf-4701d665e768] Running
	I0401 11:33:46.121996   12872 system_pods.go:126] duration metric: took 22.0875ms to wait for k8s-apps to be running ...
	I0401 11:33:46.121996   12872 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 11:33:46.134473   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 11:33:46.164622   12872 system_svc.go:56] duration metric: took 41.6057ms WaitForService to wait for kubelet
	I0401 11:33:46.164622   12872 kubeadm.go:576] duration metric: took 2m0.2732495s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:33:46.164702   12872 node_conditions.go:102] verifying NodePressure condition ...
	I0401 11:33:46.164702   12872 round_trippers.go:463] GET https://172.19.153.73:8443/api/v1/nodes
	I0401 11:33:46.164702   12872 round_trippers.go:469] Request Headers:
	I0401 11:33:46.164702   12872 round_trippers.go:473]     Accept: application/json, */*
	I0401 11:33:46.164702   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 11:33:46.169745   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 11:33:46.172452   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:33:46.172510   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:33:46.172510   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:33:46.172510   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:33:46.172510   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 11:33:46.172510   12872 node_conditions.go:123] node cpu capacity is 2
	I0401 11:33:46.172510   12872 node_conditions.go:105] duration metric: took 7.8082ms to run NodePressure ...
	I0401 11:33:46.172510   12872 start.go:240] waiting for startup goroutines ...
	I0401 11:33:46.172626   12872 start.go:254] writing updated cluster config ...
	I0401 11:33:46.186814   12872 ssh_runner.go:195] Run: rm -f paused
	I0401 11:33:46.345109   12872 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 11:33:46.347954   12872 out.go:177] * Done! kubectl is now configured to use "ha-401500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.476735925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.483332845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 cri-dockerd[1236]: time="2024-04-01T11:23:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ec0ebe3869c2e14a6d44daf4e8f82997e2dbe78e230e4717f6b006a33724e5e/resolv.conf as [nameserver 172.19.144.1]"
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.838583449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.838694147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.838715847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:23:51 ha-401500 dockerd[1351]: time="2024-04-01T11:23:51.839475438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.042482365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.042811363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.042861363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:27 ha-401500 dockerd[1351]: time="2024-04-01T11:34:27.043165061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:27 ha-401500 cri-dockerd[1236]: time="2024-04-01T11:34:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b061bd4ee58e57f9d7d8730401159795cc67f7ee12f8ba91a863233ca44c1931/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 01 11:34:28 ha-401500 cri-dockerd[1236]: time="2024-04-01T11:34:28Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.723032022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.723267320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.723289220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:34:28 ha-401500 dockerd[1351]: time="2024-04-01T11:34:28.724557008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 11:35:34 ha-401500 dockerd[1345]: 2024/04/01 11:35:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
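	The repeated "superfluous response.WriteHeader" messages from dockerd above are emitted by Go's net/http server whenever a handler writes the response header more than once; here the caller is the otelhttp instrumentation wrapper, and the messages are log noise rather than request failures. A minimal, self-contained Go sketch (illustrative only, not minikube or Docker code) reproduces the exact message once the server receives a request:
	
	    package main
	
	    import (
	        "log"
	        "net/http"
	    )
	
	    func main() {
	        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	            w.WriteHeader(http.StatusOK)
	            // The second call is ignored, and net/http logs:
	            // "http: superfluous response.WriteHeader call from ..."
	            w.WriteHeader(http.StatusOK)
	        })
	        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
	    }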
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5f0f2a70ea86       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago      Running             busybox                   0                   b061bd4ee58e5       busybox-7fdf7869d9-f5xk7
	7060906f8cfb4       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   9ec0ebe3869c2       storage-provisioner
	019f28c8ae9c2       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   4e22619d4f531       coredns-76f75df574-4xvlf
	5cf28c4d18269       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   953d3ea584fb7       coredns-76f75df574-vjslq
	6b3a35c1df165       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   35c87d7595587       kindnet-v22wx
	3b771f391aa27       a1d263b5dc5b0                                                                                         26 minutes ago      Running             kube-proxy                0                   52e412ee73928       kube-proxy-hqcpv
	55b7d7fcbecfb       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     27 minutes ago      Running             kube-vip                  0                   6b07eb59f148c       kube-vip-ha-401500
	c01764f3eda1e       39f995c9f1996                                                                                         27 minutes ago      Running             kube-apiserver            0                   73e0584affcfd       kube-apiserver-ha-401500
	2fcf6eff5adbe       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   ac7bd8f02839f       etcd-ha-401500
	d563352b33191       6052a25da3f97                                                                                         27 minutes ago      Running             kube-controller-manager   0                   8ea839602f322       kube-controller-manager-ha-401500
	57c210811c209       8c390d98f50c0                                                                                         27 minutes ago      Running             kube-scheduler            0                   c3c232b9bbe6f       kube-scheduler-ha-401500
	
	
	==> coredns [019f28c8ae9c] <==
	[INFO] 10.244.2.2:54522 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001980082s
	[INFO] 10.244.2.2:59489 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.115079073s
	[INFO] 10.244.1.2:50252 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.044782091s
	[INFO] 10.244.0.4:43900 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033609794s
	[INFO] 10.244.0.4:39947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249898s
	[INFO] 10.244.0.4:35641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000488095s
	[INFO] 10.244.2.2:35908 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280598s
	[INFO] 10.244.2.2:41756 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.018302235s
	[INFO] 10.244.2.2:52057 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000398296s
	[INFO] 10.244.2.2:41041 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164798s
	[INFO] 10.244.1.2:56971 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108999s
	[INFO] 10.244.1.2:56448 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000185098s
	[INFO] 10.244.1.2:34570 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000332597s
	[INFO] 10.244.0.4:35168 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190098s
	[INFO] 10.244.2.2:52214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130898s
	[INFO] 10.244.2.2:52209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160898s
	[INFO] 10.244.2.2:53111 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158099s
	[INFO] 10.244.2.2:39428 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140698s
	[INFO] 10.244.1.2:57304 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135399s
	[INFO] 10.244.0.4:60345 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000274498s
	[INFO] 10.244.2.2:48660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134799s
	[INFO] 10.244.2.2:39169 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102699s
	[INFO] 10.244.1.2:33430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101899s
	[INFO] 10.244.1.2:51884 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000672s
	[INFO] 10.244.1.2:45317 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000577s
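	Each CoreDNS query line above follows the log plugin's default format: client ip:port, query ID, then in quotes the query type, class, name, protocol, request size, DO bit, and advertised UDP buffer size, followed by the response code, response flags, response size in bytes, and the lookup duration. The lines appear because the Corefile enables the log plugin; a minimal illustrative Corefile (a sketch, not this cluster's actual configuration) would be:
	
	    .:53 {
	        log
	        errors
	        kubernetes cluster.local in-addr.arpa ip6.arpa
	        forward . /etc/resolv.conf
	    }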
	
	
	==> coredns [5cf28c4d1826] <==
	[INFO] 10.244.0.4:57108 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000265898s
	[INFO] 10.244.0.4:58394 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030456323s
	[INFO] 10.244.0.4:43973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212498s
	[INFO] 10.244.0.4:51457 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123599s
	[INFO] 10.244.2.2:38721 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.110294596s
	[INFO] 10.244.2.2:54063 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000233198s
	[INFO] 10.244.2.2:47298 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060999s
	[INFO] 10.244.2.2:53583 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131999s
	[INFO] 10.244.1.2:34615 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078199s
	[INFO] 10.244.1.2:35189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000067899s
	[INFO] 10.244.1.2:36491 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000254598s
	[INFO] 10.244.1.2:52312 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099599s
	[INFO] 10.244.1.2:34153 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213299s
	[INFO] 10.244.0.4:43708 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130699s
	[INFO] 10.244.0.4:41927 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000197298s
	[INFO] 10.244.0.4:37456 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000638s
	[INFO] 10.244.1.2:33504 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000539595s
	[INFO] 10.244.1.2:58378 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126299s
	[INFO] 10.244.1.2:56306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000631s
	[INFO] 10.244.0.4:60083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234898s
	[INFO] 10.244.0.4:42878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215198s
	[INFO] 10.244.0.4:53304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306097s
	[INFO] 10.244.2.2:44856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102499s
	[INFO] 10.244.2.2:55794 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110099s
	[INFO] 10.244.1.2:43653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000248898s
	
	
	==> describe nodes <==
	Name:               ha-401500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-401500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=ha-401500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T11_23_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:50:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:49:52 +0000   Mon, 01 Apr 2024 11:23:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:49:52 +0000   Mon, 01 Apr 2024 11:23:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:49:52 +0000   Mon, 01 Apr 2024 11:23:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:49:52 +0000   Mon, 01 Apr 2024 11:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.153.73
	  Hostname:    ha-401500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 51422a693e5d4c32850905b4a00e3c09
	  System UUID:                5ddecb87-f7c6-5c44-af78-64f197febc43
	  Boot ID:                    80ab2a7b-f7d8-4389-970c-c35c9af0e0bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-f5xk7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-76f75df574-4xvlf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-76f75df574-vjslq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-401500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-v22wx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-401500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-401500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-hqcpv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-401500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-401500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-401500 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-401500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-401500 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-401500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-401500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-401500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-401500 event: Registered Node ha-401500 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-401500 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-401500 event: Registered Node ha-401500 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-401500 event: Registered Node ha-401500 in Controller
	
	
	Name:               ha-401500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-401500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=ha-401500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T11_27_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:50:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:50:17 +0000   Mon, 01 Apr 2024 11:27:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:50:17 +0000   Mon, 01 Apr 2024 11:27:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:50:17 +0000   Mon, 01 Apr 2024 11:27:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:50:17 +0000   Mon, 01 Apr 2024 11:27:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.149.50
	  Hostname:    ha-401500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3811c44e7a264a1ea0a703dad5809815
	  System UUID:                6b38e67a-6be9-c344-89c9-dafa56ee053a
	  Boot ID:                    b4385542-57cb-4255-b5f6-eb5d30702515
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-q7xs6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-401500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-92s2r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-401500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-401500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-28zds                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-401500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-401500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-401500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-401500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-401500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-401500-m02 event: Registered Node ha-401500-m02 in Controller
	  Normal  NodeReady                22m                kubelet          Node ha-401500-m02 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-401500-m02 event: Registered Node ha-401500-m02 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-401500-m02 event: Registered Node ha-401500-m02 in Controller
	
	
	Name:               ha-401500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-401500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=ha-401500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T11_31_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:31:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:50:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:49:54 +0000   Mon, 01 Apr 2024 11:31:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:49:54 +0000   Mon, 01 Apr 2024 11:31:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:49:54 +0000   Mon, 01 Apr 2024 11:31:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:49:54 +0000   Mon, 01 Apr 2024 11:31:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.145.208
	  Hostname:    ha-401500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 17b921dbe6774dc4ba1f49208575ffe0
	  System UUID:                dfcb8064-0682-e848-ac60-5df21a749ba5
	  Boot ID:                    f6f66cd0-32ce-4c94-b291-e8fa4bc868dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gr89z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-401500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-8f8ts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-401500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-401500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-ccgpw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-401500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-401500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-401500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-401500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-401500-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node ha-401500-m03 event: Registered Node ha-401500-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-401500-m03 event: Registered Node ha-401500-m03 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-401500-m03 event: Registered Node ha-401500-m03 in Controller
	
	
	Name:               ha-401500-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-401500-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=ha-401500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T11_38_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:38:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401500-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:50:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:49:43 +0000   Mon, 01 Apr 2024 11:38:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:49:43 +0000   Mon, 01 Apr 2024 11:38:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:49:43 +0000   Mon, 01 Apr 2024 11:38:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:49:43 +0000   Mon, 01 Apr 2024 11:39:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.144.10
	  Hostname:    ha-401500-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6ffaf184aa74db980a314ce14c230c8
	  System UUID:                e78e85a1-837d-e943-baa1-989079b82f2d
	  Boot ID:                    f98da4cb-00a4-4021-8205-b435b0b005a5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9l9zs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-zqk5d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-401500-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-401500-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-401500-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-401500-m04 event: Registered Node ha-401500-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-401500-m04 event: Registered Node ha-401500-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-401500-m04 event: Registered Node ha-401500-m04 in Controller
	  Normal  NodeReady                11m                kubelet          Node ha-401500-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.197690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 1 11:22] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.188352] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[ +32.951072] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.134308] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.586944] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.201745] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.232104] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +2.872685] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.237073] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.228434] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.313386] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[Apr 1 11:23] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.124610] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.741406] systemd-fstab-generator[1541]: Ignoring "noauto" option for root device
	[  +6.547189] systemd-fstab-generator[1803]: Ignoring "noauto" option for root device
	[  +0.117853] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.186350] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.832357] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[ +13.740676] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.034429] kauditd_printk_skb: 29 callbacks suppressed
	[Apr 1 11:27] kauditd_printk_skb: 35 callbacks suppressed
	[Apr 1 11:43] hrtimer: interrupt took 752401 ns
	
	
	==> etcd [2fcf6eff5adb] <==
	{"level":"info","ts":"2024-04-01T11:39:04.046048Z","caller":"traceutil/trace.go:171","msg":"trace[337727651] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2926; }","duration":"426.426678ms","start":"2024-04-01T11:39:03.619587Z","end":"2024-04-01T11:39:04.046014Z","steps":["trace[337727651] 'agreement among raft nodes before linearized reading'  (duration: 413.1818ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T11:39:04.046314Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T11:39:03.619572Z","time spent":"426.730578ms","remote":"127.0.0.1:41688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"warn","ts":"2024-04-01T11:39:04.053874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"433.381767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-401500-m04\" ","response":"range_response_count:1 size:2892"}
	{"level":"info","ts":"2024-04-01T11:39:04.053979Z","caller":"traceutil/trace.go:171","msg":"trace[1244721105] range","detail":"{range_begin:/registry/minions/ha-401500-m04; range_end:; response_count:1; response_revision:2928; }","duration":"433.492767ms","start":"2024-04-01T11:39:03.620464Z","end":"2024-04-01T11:39:04.053957Z","steps":["trace[1244721105] 'agreement among raft nodes before linearized reading'  (duration: 433.374667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T11:39:04.054009Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T11:39:03.620457Z","time spent":"433.544667ms","remote":"127.0.0.1:41576","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":2914,"request content":"key:\"/registry/minions/ha-401500-m04\" "}
	{"level":"warn","ts":"2024-04-01T11:39:10.490307Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"dc7977933efe4c2a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"103.59143ms"}
	{"level":"warn","ts":"2024-04-01T11:39:10.490504Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"6676f72c374aa517","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"103.793929ms"}
	{"level":"warn","ts":"2024-04-01T11:39:10.581322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"439.736919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-01T11:39:10.581447Z","caller":"traceutil/trace.go:171","msg":"trace[1764504813] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2947; }","duration":"439.934118ms","start":"2024-04-01T11:39:10.141499Z","end":"2024-04-01T11:39:10.581433Z","steps":["trace[1764504813] 'range keys from in-memory index tree'  (duration: 438.109122ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T11:39:10.581555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T11:39:10.141482Z","time spent":"439.991118ms","remote":"127.0.0.1:41570","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1132,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-04-01T11:39:10.582012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.416088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-01T11:39:10.582252Z","caller":"traceutil/trace.go:171","msg":"trace[1473803791] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2947; }","duration":"395.621787ms","start":"2024-04-01T11:39:10.186519Z","end":"2024-04-01T11:39:10.582141Z","steps":["trace[1473803791] 'range keys from in-memory index tree'  (duration: 394.07699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T11:39:10.582556Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T11:39:10.186467Z","time spent":"395.944987ms","remote":"127.0.0.1:41688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":456,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"info","ts":"2024-04-01T11:39:10.583217Z","caller":"traceutil/trace.go:171","msg":"trace[64746757] transaction","detail":"{read_only:false; response_revision:2948; number_of_response:1; }","duration":"349.275959ms","start":"2024-04-01T11:39:10.233895Z","end":"2024-04-01T11:39:10.583171Z","steps":["trace[64746757] 'process raft request'  (duration: 309.37822ms)","trace[64746757] 'compare'  (duration: 37.944142ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T11:39:10.583767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T11:39:10.233875Z","time spent":"349.745058ms","remote":"127.0.0.1:41688","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":524,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-401500\" mod_revision:2899 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-401500\" value_size:474 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-401500\" > >"}
	{"level":"warn","ts":"2024-04-01T11:39:15.497037Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.962507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T11:39:15.497231Z","caller":"traceutil/trace.go:171","msg":"trace[741896053] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2967; }","duration":"133.204707ms","start":"2024-04-01T11:39:15.36401Z","end":"2024-04-01T11:39:15.497215Z","steps":["trace[741896053] 'range keys from in-memory index tree'  (duration: 131.696609ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T11:39:16.090548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.509723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:4 size:17113"}
	{"level":"info","ts":"2024-04-01T11:39:16.090629Z","caller":"traceutil/trace.go:171","msg":"trace[123971384] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:4; response_revision:2968; }","duration":"192.628023ms","start":"2024-04-01T11:39:15.897987Z","end":"2024-04-01T11:39:16.090615Z","steps":["trace[123971384] 'range keys from in-memory index tree'  (duration: 190.725927ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T11:43:21.237791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2732}
	{"level":"info","ts":"2024-04-01T11:43:21.28206Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2732,"took":"43.503512ms","hash":1496351119,"current-db-size-bytes":3440640,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":2285568,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-01T11:43:21.282271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1496351119,"revision":2732,"compact-revision":1896}
	{"level":"info","ts":"2024-04-01T11:48:21.264618Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3585}
	{"level":"info","ts":"2024-04-01T11:48:21.312246Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":3585,"took":"46.993356ms","hash":992865577,"current-db-size-bytes":3440640,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":2076672,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-04-01T11:48:21.312386Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":992865577,"revision":3585,"compact-revision":2732}
	
	
	==> kernel <==
	 11:50:32 up 29 min,  0 users,  load average: 0.98, 0.50, 0.43
	Linux ha-401500 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6b3a35c1df16] <==
	I0401 11:50:02.281246       1 main.go:250] Node ha-401500-m04 has CIDR [10.244.3.0/24] 
	I0401 11:50:12.294231       1 main.go:223] Handling node with IPs: map[172.19.153.73:{}]
	I0401 11:50:12.294337       1 main.go:227] handling current node
	I0401 11:50:12.294354       1 main.go:223] Handling node with IPs: map[172.19.149.50:{}]
	I0401 11:50:12.294364       1 main.go:250] Node ha-401500-m02 has CIDR [10.244.1.0/24] 
	I0401 11:50:12.294906       1 main.go:223] Handling node with IPs: map[172.19.145.208:{}]
	I0401 11:50:12.294943       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
	I0401 11:50:12.295254       1 main.go:223] Handling node with IPs: map[172.19.144.10:{}]
	I0401 11:50:12.295341       1 main.go:250] Node ha-401500-m04 has CIDR [10.244.3.0/24] 
	I0401 11:50:22.314770       1 main.go:223] Handling node with IPs: map[172.19.153.73:{}]
	I0401 11:50:22.315031       1 main.go:227] handling current node
	I0401 11:50:22.315851       1 main.go:223] Handling node with IPs: map[172.19.149.50:{}]
	I0401 11:50:22.315957       1 main.go:250] Node ha-401500-m02 has CIDR [10.244.1.0/24] 
	I0401 11:50:22.316393       1 main.go:223] Handling node with IPs: map[172.19.145.208:{}]
	I0401 11:50:22.316411       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
	I0401 11:50:22.316646       1 main.go:223] Handling node with IPs: map[172.19.144.10:{}]
	I0401 11:50:22.316658       1 main.go:250] Node ha-401500-m04 has CIDR [10.244.3.0/24] 
	I0401 11:50:32.333734       1 main.go:223] Handling node with IPs: map[172.19.153.73:{}]
	I0401 11:50:32.333783       1 main.go:227] handling current node
	I0401 11:50:32.333797       1 main.go:223] Handling node with IPs: map[172.19.149.50:{}]
	I0401 11:50:32.333804       1 main.go:250] Node ha-401500-m02 has CIDR [10.244.1.0/24] 
	I0401 11:50:32.333955       1 main.go:223] Handling node with IPs: map[172.19.145.208:{}]
	I0401 11:50:32.333967       1 main.go:250] Node ha-401500-m03 has CIDR [10.244.2.0/24] 
	I0401 11:50:32.334240       1 main.go:223] Handling node with IPs: map[172.19.144.10:{}]
	I0401 11:50:32.334271       1 main.go:250] Node ha-401500-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c01764f3eda1] <==
	I0401 11:23:26.567024       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 11:23:26.917294       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 11:23:28.464351       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 11:23:28.488431       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 11:23:28.531752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 11:23:40.495640       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0401 11:23:40.839168       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0401 11:30:42.589017       1 trace.go:236] Trace[1540639200]: "Update" accept:application/json, */*,audit-id:44480bd1-3ac5-479c-9676-96bf1c895e58,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (01-Apr-2024 11:30:42.016) (total time: 572ms):
	Trace[1540639200]: ["GuaranteedUpdate etcd3" audit-id:44480bd1-3ac5-479c-9676-96bf1c895e58,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 571ms (11:30:42.017)
	Trace[1540639200]:  ---"Txn call completed" 570ms (11:30:42.588)]
	Trace[1540639200]: [572.011955ms] [572.011955ms] END
	I0401 11:30:57.241001       1 trace.go:236] Trace[1094372534]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.153.73,type:*v1.Endpoints,resource:apiServerIPInfo (01-Apr-2024 11:30:56.541) (total time: 699ms):
	Trace[1094372534]: ---"Transaction prepared" 290ms (11:30:56.841)
	Trace[1094372534]: ---"Txn call completed" 399ms (11:30:57.240)
	Trace[1094372534]: [699.175936ms] [699.175936ms] END
	E0401 11:31:31.719000       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0401 11:31:31.719158       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0401 11:31:31.719481       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 146.002µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0401 11:31:31.721029       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0401 11:31:31.721293       1 timeout.go:142] post-timeout activity - time-elapsed: 2.35833ms, PATCH "/api/v1/namespaces/default/events/ha-401500-m03.17c224a2ffeae3ac" result: <nil>
	I0401 11:31:37.100572       1 trace.go:236] Trace[1790901363]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.153.73,type:*v1.Endpoints,resource:apiServerIPInfo (01-Apr-2024 11:31:36.543) (total time: 556ms):
	Trace[1790901363]: ---"initial value restored" 255ms (11:31:36.799)
	Trace[1790901363]: ---"Transaction prepared" 193ms (11:31:36.993)
	Trace[1790901363]: ---"Txn call completed" 106ms (11:31:37.100)
	Trace[1790901363]: [556.538778ms] [556.538778ms] END
	
	
	==> kube-controller-manager [d563352b3319] <==
	I0401 11:34:26.491768       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-hfdfm"
	I0401 11:34:26.525192       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-6cnqv"
	I0401 11:34:26.601862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="291.262082ms"
	I0401 11:34:26.659363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.093824ms"
	I0401 11:34:26.660047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="144.299µs"
	I0401 11:34:26.768706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="36.217961ms"
	I0401 11:34:26.769545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="80.6µs"
	I0401 11:34:27.949641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="87.499µs"
	I0401 11:34:28.899535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="94.19232ms"
	I0401 11:34:28.899757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="131.698µs"
	I0401 11:34:29.050229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.261491ms"
	I0401 11:34:29.050456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.099µs"
	I0401 11:34:29.397821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.797822ms"
	I0401 11:34:29.397893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.3µs"
	I0401 11:38:58.686181       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-401500-m04\" does not exist"
	I0401 11:38:58.732021       1 range_allocator.go:380] "Set node PodCIDR" node="ha-401500-m04" podCIDRs=["10.244.3.0/24"]
	I0401 11:38:58.763858       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cd52j"
	I0401 11:38:58.764033       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zqk5d"
	I0401 11:38:58.969731       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-fh7sr"
	I0401 11:38:59.082113       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-shjj4"
	I0401 11:38:59.144591       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-cd52j"
	I0401 11:38:59.166944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-h2flh"
	I0401 11:39:00.564895       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-401500-m04"
	I0401 11:39:00.565477       1 event.go:376] "Event occurred" object="ha-401500-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-401500-m04 event: Registered Node ha-401500-m04 in Controller"
	I0401 11:39:21.235441       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-401500-m04"
	
	
	==> kube-proxy [3b771f391aa2] <==
	I0401 11:23:42.238608       1 server_others.go:72] "Using iptables proxy"
	I0401 11:23:42.258527       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.153.73"]
	I0401 11:23:42.454730       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 11:23:42.454794       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 11:23:42.454828       1 server_others.go:168] "Using iptables Proxier"
	I0401 11:23:42.460899       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 11:23:42.462244       1 server.go:865] "Version info" version="v1.29.3"
	I0401 11:23:42.462365       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 11:23:42.468342       1 config.go:97] "Starting endpoint slice config controller"
	I0401 11:23:42.468458       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 11:23:42.468664       1 config.go:188] "Starting service config controller"
	I0401 11:23:42.468747       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 11:23:42.469475       1 config.go:315] "Starting node config controller"
	I0401 11:23:42.469930       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 11:23:42.479975       1 shared_informer.go:318] Caches are synced for node config
	I0401 11:23:42.569506       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 11:23:42.569520       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [57c210811c20] <==
	W0401 11:23:25.227548       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 11:23:25.227646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 11:23:25.229694       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 11:23:25.229883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0401 11:23:25.279270       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 11:23:25.279369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 11:23:25.421887       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 11:23:25.422345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 11:23:25.496793       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 11:23:25.497047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 11:23:25.499516       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 11:23:25.500167       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 11:23:25.591673       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 11:23:25.591922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 11:23:25.605803       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 11:23:25.605836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0401 11:23:28.596668       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0401 11:31:31.057365       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8f8ts\": pod kindnet-8f8ts is already assigned to node \"ha-401500-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8f8ts" node="ha-401500-m03"
	E0401 11:31:31.057806       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod bd227165-7098-4498-8ba6-6f903edfef84(kube-system/kindnet-8f8ts) wasn't assumed so cannot be forgotten"
	E0401 11:31:31.058211       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8f8ts\": pod kindnet-8f8ts is already assigned to node \"ha-401500-m03\"" pod="kube-system/kindnet-8f8ts"
	I0401 11:31:31.058243       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8f8ts" node="ha-401500-m03"
	E0401 11:31:31.072658       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ccgpw\": pod kube-proxy-ccgpw is already assigned to node \"ha-401500-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ccgpw" node="ha-401500-m03"
	E0401 11:31:31.072838       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod e8debcf2-d756-4fc4-9931-102b1eef4ee5(kube-system/kube-proxy-ccgpw) wasn't assumed so cannot be forgotten"
	E0401 11:31:31.072895       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ccgpw\": pod kube-proxy-ccgpw is already assigned to node \"ha-401500-m03\"" pod="kube-system/kube-proxy-ccgpw"
	I0401 11:31:31.073021       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ccgpw" node="ha-401500-m03"
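
	The "Plugin Failed ... already assigned to node" pairs above look alarming but appear to be lost races rather than scheduling failures: a retried bind found that an earlier attempt had already placed the pod, and the scheduler then aborts re-queueing ("Pod has been assigned to node"). If verification is wanted, a field selector shows where those pods actually landed; the context and node names are the ones from this run:

	$ kubectl --context ha-401500 get pods -n kube-system -o wide --field-selector spec.nodeName=ha-401500-m03

	kindnet-8f8ts and kube-proxy-ccgpw showing Running on ha-401500-m03 would confirm the binds succeeded despite the errors.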
	
	
	==> kubelet <==
	Apr 01 11:46:28 ha-401500 kubelet[2862]: E0401 11:46:28.858746    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:46:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:46:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:46:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:46:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:47:28 ha-401500 kubelet[2862]: E0401 11:47:28.860969    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:47:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:47:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:47:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:47:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:48:28 ha-401500 kubelet[2862]: E0401 11:48:28.860618    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:48:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:48:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:48:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:48:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:49:28 ha-401500 kubelet[2862]: E0401 11:49:28.864888    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:49:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:49:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:49:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:49:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 11:50:28 ha-401500 kubelet[2862]: E0401 11:50:28.862359    2862 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 11:50:28 ha-401500 kubelet[2862]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 11:50:28 ha-401500 kubelet[2862]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 11:50:28 ha-401500 kubelet[2862]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 11:50:28 ha-401500 kubelet[2862]:  > table="nat" chain="KUBE-KUBELET-CANARY"
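
	The only kubelet output in this window is its periodic (here, once a minute) iptables canary failing for IPv6: the guest kernel exposes no ip6tables nat table, so creating the KUBE-KUBELET-CANARY chain exits with status 3. For an IPv4-only cluster this reads as noise rather than a fault. Whether the module is merely unloaded or absent from the Buildroot kernel can be checked from the guest; modprobe reporting the module as not found would mean the latter:

	$ minikube ssh -p ha-401500 -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"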
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 11:50:23.066583    7016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
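
The lone stderr line is the Docker CLI context warning that recurs throughout this report: minikube probes the CLI context metadata under C:\Users\jenkins.minikube6\.docker and the "default" context's meta.json is missing. It is cosmetic for the Hyper-V driver, since no Docker endpoint is needed to manage the VM. On a machine where the warning matters, resetting the CLI to the builtin default context is the usual step, though whether that silences this particular probe is an assumption:

	> docker context ls
	> docker context use default
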
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-401500 -n ha-401500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-401500 -n ha-401500: (13.0131996s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-401500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (598.90s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (234.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0401 12:23:23.466327    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (3m41.4868595s)
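
	The start command exits with status 90 and the captured stdout (below) cuts off right after "Creating hyperv VM", so the useful detail lives in the machine logs rather than the console. When reproducing, the same binary can dump them to a file; the profile name is the one from this run:

	> out/minikube-windows-amd64.exe logs -p multinode-965600 --file=multinode-965600.log

	The cert_rotation error just above is a stale reference to the addons-852800 profile's client.crt (the file no longer exists) and is unrelated to this failure; minikube delete -p addons-852800 should clear it, assuming that profile is no longer needed.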

                                                
                                                
-- stdout --
	* [multinode-965600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "multinode-965600" primary control-plane node in "multinode-965600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:22:29.569568    6744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:22:29.642202    6744 out.go:291] Setting OutFile to fd 772 ...
	I0401 12:22:29.642202    6744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:22:29.642942    6744 out.go:304] Setting ErrFile to fd 576...
	I0401 12:22:29.642942    6744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:22:29.665829    6744 out.go:298] Setting JSON to false
	I0401 12:22:29.669506    6744 start.go:129] hostinfo: {"hostname":"minikube6","uptime":316907,"bootTime":1711657241,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 12:22:29.669506    6744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 12:22:29.675062    6744 out.go:177] * [multinode-965600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 12:22:29.678771    6744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:22:29.678771    6744 notify.go:220] Checking for updates...
	I0401 12:22:29.682473    6744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 12:22:29.684725    6744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 12:22:29.690348    6744 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 12:22:29.693688    6744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 12:22:29.697500    6744 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:22:29.697896    6744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 12:22:35.357242    6744 out.go:177] * Using the hyperv driver based on user configuration
	I0401 12:22:35.363055    6744 start.go:297] selected driver: hyperv
	I0401 12:22:35.363055    6744 start.go:901] validating driver "hyperv" against <nil>
	I0401 12:22:35.363055    6744 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 12:22:35.412780    6744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 12:22:35.414086    6744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 12:22:35.414225    6744 cni.go:84] Creating CNI manager for ""
	I0401 12:22:35.414225    6744 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0401 12:22:35.414225    6744 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 12:22:35.414225    6744 start.go:340] cluster config:
	{Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:22:35.414225    6744 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 12:22:35.419370    6744 out.go:177] * Starting "multinode-965600" primary control-plane node in "multinode-965600" cluster
	I0401 12:22:35.422611    6744 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 12:22:35.422775    6744 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 12:22:35.422830    6744 cache.go:56] Caching tarball of preloaded images
	I0401 12:22:35.423240    6744 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 12:22:35.423409    6744 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 12:22:35.423767    6744 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\config.json ...
	I0401 12:22:35.424098    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\config.json: {Name:mk2a48a17d652d4da2e49bc61b2bcf95641dfdb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:22:35.425086    6744 start.go:360] acquireMachinesLock for multinode-965600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 12:22:35.425086    6744 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-965600"
	I0401 12:22:35.425086    6744 start.go:93] Provisioning new machine with config: &{Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 12:22:35.425731    6744 start.go:125] createHost starting for "" (driver="hyperv")
	I0401 12:22:35.428185    6744 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 12:22:35.428534    6744 start.go:159] libmachine.API.Create for "multinode-965600" (driver="hyperv")
	I0401 12:22:35.428638    6744 client.go:168] LocalClient.Create starting
	I0401 12:22:35.428638    6744 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 12:22:35.429415    6744 main.go:141] libmachine: Decoding PEM data...
	I0401 12:22:35.429482    6744 main.go:141] libmachine: Parsing certificate...
	I0401 12:22:35.429750    6744 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 12:22:35.429854    6744 main.go:141] libmachine: Decoding PEM data...
	I0401 12:22:35.429854    6744 main.go:141] libmachine: Parsing certificate...
	I0401 12:22:35.430145    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 12:22:37.657277    6744 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 12:22:37.657277    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:37.657874    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 12:22:39.500456    6744 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 12:22:39.501066    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:39.501128    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 12:22:41.079362    6744 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 12:22:41.079362    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:41.079619    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 12:22:45.006033    6744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 12:22:45.006033    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:45.008835    6744 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 12:22:45.549507    6744 main.go:141] libmachine: Creating SSH key...
	I0401 12:22:45.642974    6744 main.go:141] libmachine: Creating VM...
	I0401 12:22:45.642974    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 12:22:48.672011    6744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 12:22:48.672011    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:48.672224    6744 main.go:141] libmachine: Using switch "Default Switch"
	I0401 12:22:48.672224    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 12:22:50.514383    6744 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 12:22:50.514383    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:50.514383    6744 main.go:141] libmachine: Creating VHD
	I0401 12:22:50.515125    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 12:22:54.373820    6744 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3424467E-5085-4298-9A05-476EF2BEB173
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 12:22:54.374086    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:54.374086    6744 main.go:141] libmachine: Writing magic tar header
	I0401 12:22:54.374086    6744 main.go:141] libmachine: Writing SSH key tar header
	I0401 12:22:54.384990    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 12:22:57.645094    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:22:57.645094    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:22:57.645094    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\disk.vhd' -SizeBytes 20000MB
	I0401 12:23:00.255766    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:00.255766    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:00.256859    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-965600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0401 12:23:04.035783    6744 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-965600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 12:23:04.035783    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:04.036509    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-965600 -DynamicMemoryEnabled $false
	I0401 12:23:06.408839    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:06.408839    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:06.409057    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-965600 -Count 2
	I0401 12:23:08.780063    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:08.780063    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:08.781109    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-965600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\boot2docker.iso'
	I0401 12:23:11.596136    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:11.596246    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:11.596246    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-965600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\disk.vhd'
	I0401 12:23:14.471225    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:14.471888    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:14.471888    6744 main.go:141] libmachine: Starting VM...
	I0401 12:23:14.471888    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-965600
	I0401 12:23:17.746283    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:17.747271    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:17.747271    6744 main.go:141] libmachine: Waiting for host to start...
	I0401 12:23:17.747451    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:20.151676    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:20.151676    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:20.151676    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:23:22.827830    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:22.827830    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:23.830271    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:26.157821    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:26.158779    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:26.158881    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:23:28.799803    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:28.799803    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:29.814695    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:32.126508    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:32.126585    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:32.126657    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:23:34.840275    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:34.840275    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:35.846033    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:38.204025    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:38.204579    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:38.204650    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:23:40.883517    6744 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:23:40.883517    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:41.898354    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:44.287148    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:44.288023    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:44.288023    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:23:47.047833    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:23:47.048680    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:47.048762    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:49.307388    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:49.308412    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:49.308444    6744 machine.go:94] provisionDockerMachine start ...
	I0401 12:23:49.308444    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:51.546322    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:51.546322    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:51.547403    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:23:54.171963    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:23:54.172601    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:54.178926    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:23:54.188637    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:23:54.188637    6744 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 12:23:54.317967    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 12:23:54.318106    6744 buildroot.go:166] provisioning hostname "multinode-965600"
	I0401 12:23:54.318222    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:23:56.557672    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:23:56.557672    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:56.558709    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:23:59.225497    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:23:59.225497    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:23:59.231304    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:23:59.231514    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:23:59.231514    6744 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-965600 && echo "multinode-965600" | sudo tee /etc/hostname
	I0401 12:23:59.389737    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965600
	
	I0401 12:23:59.389737    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:01.629412    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:01.630381    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:01.630437    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:04.297761    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:04.297761    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:04.305053    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:24:04.305609    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:24:04.305609    6744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-965600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-965600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-965600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 12:24:04.446435    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
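Editor's note: the shell fragment above is minikube's idempotent /etc/hosts update: leave the file alone when the hostname is already mapped, rewrite an existing 127.0.1.1 alias in place, and otherwise append one. The same three-way logic as a Go sketch (an approximation: the real grep -xq anchors whole lines):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // patchHosts mirrors the grep/sed/append chain above.
    func patchHosts(contents, hostname string) string {
    	lines := strings.Split(contents, "\n")
    	for _, line := range lines {
    		f := strings.Fields(line)
    		if len(f) >= 2 && f[len(f)-1] == hostname {
    			return contents // already mapped: first grep matches, nothing to do
    		}
    	}
    	for i, line := range lines {
    		if strings.HasPrefix(line, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname // sed branch: rewrite in place
    			return strings.Join(lines, "\n")
    		}
    	}
    	// tee -a branch: no loopback alias yet, append one
    	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	fmt.Print(patchHosts("127.0.0.1 localhost\n", "multinode-965600"))
    }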
	I0401 12:24:04.446435    6744 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 12:24:04.446435    6744 buildroot.go:174] setting up certificates
	I0401 12:24:04.446435    6744 provision.go:84] configureAuth start
	I0401 12:24:04.446435    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:06.667599    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:06.667635    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:06.667635    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:09.338417    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:09.339521    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:09.339554    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:11.600330    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:11.600330    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:11.600330    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:14.341390    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:14.342403    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:14.342403    6744 provision.go:143] copyHostCerts
	I0401 12:24:14.342620    6744 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 12:24:14.343227    6744 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 12:24:14.343256    6744 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 12:24:14.343256    6744 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 12:24:14.344719    6744 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 12:24:14.344719    6744 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 12:24:14.344994    6744 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 12:24:14.345239    6744 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 12:24:14.345998    6744 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 12:24:14.345998    6744 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 12:24:14.345998    6744 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 12:24:14.346774    6744 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 12:24:14.347991    6744 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-965600 san=[127.0.0.1 172.19.151.177 localhost minikube multinode-965600]
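Editor's note: provision.go:117 issues a server certificate signed by the local CA with exactly the SAN set listed (two IPs plus three DNS names). A rough sketch of that step with crypto/x509; the serial number, validity window, and key size here are illustrative assumptions, not what minikube actually uses:

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server certificate for the SANs shown in the log.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2), // illustrative
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-965600"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.151.177")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-965600"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }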
	I0401 12:24:14.548605    6744 provision.go:177] copyRemoteCerts
	I0401 12:24:14.561552    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 12:24:14.561714    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:16.802807    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:16.802865    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:16.802865    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:19.450670    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:19.450889    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:19.451361    6744 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:24:19.550410    6744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9887341s)
	I0401 12:24:19.550410    6744 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 12:24:19.551707    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 12:24:19.599540    6744 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 12:24:19.600728    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0401 12:24:19.647481    6744 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 12:24:19.673296    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 12:24:19.721647    6744 provision.go:87] duration metric: took 15.2751052s to configureAuth
	I0401 12:24:19.721647    6744 buildroot.go:189] setting minikube options for container-runtime
	I0401 12:24:19.721647    6744 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:24:19.722232    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:21.924675    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:21.924958    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:21.924958    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:24.607615    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:24.608160    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:24.617271    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:24:24.617271    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:24:24.617271    6744 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 12:24:24.752204    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 12:24:24.752277    6744 buildroot.go:70] root file system type: tmpfs
	I0401 12:24:24.752524    6744 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 12:24:24.752576    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:27.010862    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:27.011674    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:27.011674    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:29.712912    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:29.713724    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:29.719977    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:24:29.720774    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:24:29.720774    6744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 12:24:29.900631    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 12:24:29.900631    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:32.129029    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:32.129062    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:32.129130    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:34.754759    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:34.755393    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:34.761031    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:24:34.761929    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:24:34.761929    6744 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 12:24:36.937974    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
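Editor's note: the one-liner at 12:24:34 is a compare-and-swap: diff -u exits zero only when the rendered unit matches the installed one, so the || branch (mv, daemon-reload, enable, restart) runs only on change. Here diff fails because no docker.service exists yet on the fresh VM, so the branch installs and enables it for the first time, hence the "Created symlink" line. The same pattern as a hedged Go sketch (installUnit is illustrative, and os.Rename would need the privileges the real command gets from sudo):

    package sketch

    import (
    	"os"
    	"os/exec"
    )

    // installUnit replaces a systemd unit and restarts its service only
    // when the freshly rendered file differs from the installed one.
    func installUnit(path, service string) error {
    	newPath := path + ".new"
    	// diff exits non-zero both when the files differ and when the old
    	// file is missing -- the "can't stat" case in this log.
    	if exec.Command("diff", "-u", path, newPath).Run() == nil {
    		return os.Remove(newPath) // identical: leave the running service alone
    	}
    	if err := os.Rename(newPath, path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", service},
    		{"systemctl", "-f", "restart", service},
    	} {
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }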
	
	I0401 12:24:36.938037    6744 machine.go:97] duration metric: took 47.6292593s to provisionDockerMachine
	I0401 12:24:36.938099    6744 client.go:171] duration metric: took 2m1.5085483s to LocalClient.Create
	I0401 12:24:36.938099    6744 start.go:167] duration metric: took 2m1.5087141s to libmachine.API.Create "multinode-965600"
	I0401 12:24:36.938165    6744 start.go:293] postStartSetup for "multinode-965600" (driver="hyperv")
	I0401 12:24:36.938219    6744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 12:24:36.951665    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 12:24:36.951665    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:39.249410    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:39.250439    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:39.250521    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:42.013139    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:42.013139    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:42.013599    6744 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:24:42.129750    6744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1780491s)
	I0401 12:24:42.142509    6744 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 12:24:42.150352    6744 command_runner.go:130] > NAME=Buildroot
	I0401 12:24:42.150414    6744 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 12:24:42.150414    6744 command_runner.go:130] > ID=buildroot
	I0401 12:24:42.150414    6744 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 12:24:42.150414    6744 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 12:24:42.150414    6744 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 12:24:42.150414    6744 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 12:24:42.151161    6744 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 12:24:42.151923    6744 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 12:24:42.151923    6744 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 12:24:42.167047    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 12:24:42.188760    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 12:24:42.243873    6744 start.go:296] duration metric: took 5.3055865s for postStartSetup
	I0401 12:24:42.246569    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:44.530768    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:44.531567    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:44.531567    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:47.213734    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:47.213734    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:47.213734    6744 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\config.json ...
	I0401 12:24:47.217102    6744 start.go:128] duration metric: took 2m11.7904485s to createHost
	I0401 12:24:47.217213    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:49.452485    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:49.452485    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:49.452485    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:52.091776    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:52.091776    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:52.097536    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:24:52.098365    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:24:52.098365    6744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 12:24:52.225743    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711974292.226430780
	
	I0401 12:24:52.226271    6744 fix.go:216] guest clock: 1711974292.226430780
	I0401 12:24:52.226271    6744 fix.go:229] Guest: 2024-04-01 12:24:52.22643078 +0000 UTC Remote: 2024-04-01 12:24:47.2171024 +0000 UTC m=+137.750415701 (delta=5.00932838s)
	I0401 12:24:52.226406    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:54.438092    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:54.438308    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:54.438308    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:24:57.116015    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:24:57.116015    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:57.122899    6744 main.go:141] libmachine: Using SSH client type: native
	I0401 12:24:57.123604    6744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.177 22 <nil> <nil>}
	I0401 12:24:57.123604    6744 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711974292
	I0401 12:24:57.267125    6744 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 12:24:52 UTC 2024
	
	I0401 12:24:57.268308    6744 fix.go:236] clock set: Mon Apr  1 12:24:52 UTC 2024
	 (err=<nil>)
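Editor's note: fix.go reads the guest clock with "date +%s.%N", compares it against the host-side timestamp taken at the end of createHost (a 5.009s delta here), and resets the guest with "sudo date -s @<epoch>". A sketch of the drift check; the threshold is an assumption, and which side's time gets written back is not visible in this log:

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // syncGuestClock compares the guest's epoch reading against the host
    // clock and returns the reset command when the drift is too large.
    func syncGuestClock(guestEpoch float64, host time.Time) (cmd string, needed bool) {
    	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
    	drift := guest.Sub(host)
    	if drift < 0 {
    		drift = -drift
    	}
    	if drift < 2*time.Second { // assumed cutoff, not minikube's
    		return "", false
    	}
    	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
    }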
	I0401 12:24:57.268308    6744 start.go:83] releasing machines lock for "multinode-965600", held for 2m21.8422291s
	I0401 12:24:57.268561    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:24:59.491319    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:24:59.492078    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:24:59.492078    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:25:02.188440    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:25:02.188583    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:25:02.193228    6744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 12:25:02.193387    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:25:02.203214    6744 ssh_runner.go:195] Run: cat /version.json
	I0401 12:25:02.203214    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:25:04.543195    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:25:04.543195    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:25:04.543195    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:25:04.548180    6744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:25:04.548180    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:25:04.548180    6744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:25:07.377423    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:25:07.377423    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:25:07.378491    6744 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:25:07.403000    6744 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:25:07.403000    6744 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:25:07.404101    6744 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:25:07.574164    6744 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 12:25:07.574164    6744 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 12:25:07.574164    6744 ssh_runner.go:235] Completed: cat /version.json: (5.3709122s)
	I0401 12:25:07.574164    6744 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3808104s)
	I0401 12:25:07.588494    6744 ssh_runner.go:195] Run: systemctl --version
	I0401 12:25:07.596502    6744 command_runner.go:130] > systemd 252 (252)
	I0401 12:25:07.596502    6744 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 12:25:07.612010    6744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 12:25:07.622978    6744 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 12:25:07.623009    6744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 12:25:07.641102    6744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 12:25:07.673424    6744 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0401 12:25:07.673592    6744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 12:25:07.673592    6744 start.go:494] detecting cgroup driver to use...
	I0401 12:25:07.673592    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:25:07.712628    6744 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 12:25:07.725381    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 12:25:07.762846    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 12:25:07.788898    6744 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 12:25:07.806632    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 12:25:07.845852    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:25:07.880874    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 12:25:07.918703    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:25:07.953832    6744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 12:25:07.987689    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 12:25:08.022361    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 12:25:08.062569    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
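Editor's note: the run of "sed -i -r" calls between 12:25:07 and 12:25:08 rewrites /etc/containerd/config.toml in place: pin the pause sandbox image, force SystemdCgroup = false (the "cgroupfs" driver from containerd.go:146), migrate runtime v1 names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. One of those substitutions translated to Go's regexp, matching the sed expression one-for-one:

    package sketch

    import "regexp"

    // setSystemdCgroup is the Go equivalent of:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    var systemdCgroupRe = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    func setSystemdCgroup(configTOML string) string {
    	return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }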
	I0401 12:25:08.099552    6744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 12:25:08.119550    6744 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 12:25:08.133194    6744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 12:25:08.167219    6744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:25:08.401598    6744 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 12:25:08.445139    6744 start.go:494] detecting cgroup driver to use...
	I0401 12:25:08.458781    6744 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 12:25:08.491377    6744 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 12:25:08.491377    6744 command_runner.go:130] > [Unit]
	I0401 12:25:08.491377    6744 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 12:25:08.491377    6744 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 12:25:08.491377    6744 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 12:25:08.491377    6744 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 12:25:08.491377    6744 command_runner.go:130] > StartLimitBurst=3
	I0401 12:25:08.491377    6744 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 12:25:08.491377    6744 command_runner.go:130] > [Service]
	I0401 12:25:08.491377    6744 command_runner.go:130] > Type=notify
	I0401 12:25:08.491377    6744 command_runner.go:130] > Restart=on-failure
	I0401 12:25:08.491377    6744 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 12:25:08.491377    6744 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 12:25:08.491377    6744 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 12:25:08.491377    6744 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 12:25:08.491377    6744 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 12:25:08.491377    6744 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 12:25:08.491377    6744 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 12:25:08.491377    6744 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 12:25:08.491377    6744 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 12:25:08.491377    6744 command_runner.go:130] > ExecStart=
	I0401 12:25:08.491377    6744 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 12:25:08.491377    6744 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 12:25:08.491377    6744 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 12:25:08.491377    6744 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 12:25:08.491377    6744 command_runner.go:130] > LimitNOFILE=infinity
	I0401 12:25:08.491377    6744 command_runner.go:130] > LimitNPROC=infinity
	I0401 12:25:08.491377    6744 command_runner.go:130] > LimitCORE=infinity
	I0401 12:25:08.491377    6744 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 12:25:08.491377    6744 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0401 12:25:08.491377    6744 command_runner.go:130] > TasksMax=infinity
	I0401 12:25:08.491377    6744 command_runner.go:130] > TimeoutStartSec=0
	I0401 12:25:08.491944    6744 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 12:25:08.491983    6744 command_runner.go:130] > Delegate=yes
	I0401 12:25:08.491983    6744 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 12:25:08.491983    6744 command_runner.go:130] > KillMode=process
	I0401 12:25:08.492034    6744 command_runner.go:130] > [Install]
	I0401 12:25:08.492034    6744 command_runner.go:130] > WantedBy=multi-user.target
	I0401 12:25:08.505324    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:25:08.546734    6744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 12:25:08.610495    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:25:08.650061    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:25:08.692088    6744 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 12:25:08.764092    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:25:08.793886    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:25:08.832920    6744 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0401 12:25:08.846349    6744 ssh_runner.go:195] Run: which cri-dockerd
	I0401 12:25:08.852625    6744 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 12:25:08.864724    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 12:25:08.884745    6744 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 12:25:08.933719    6744 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 12:25:09.162150    6744 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 12:25:09.397327    6744 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 12:25:09.397327    6744 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 12:25:09.451099    6744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:25:09.669842    6744 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 12:26:10.814702    6744 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0401 12:26:10.815320    6744 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0401 12:26:10.815598    6744 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1446682s)
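Editor's note: this is the failure at the heart of the timing above: "systemctl restart docker" blocks for 1m1s before systemd gives up, and minikube's next step (below) is to dump the unit's journal for the report. A sketch of that restart-then-collect pattern; restartDocker and the two-minute deadline are illustrative:

    package sketch

    import (
    	"context"
    	"os/exec"
    	"time"
    )

    // restartDocker mirrors the failing step at 12:25:09: restart the
    // service and, on failure, pull the full journal for diagnosis,
    // as the log does next with journalctl --no-pager -u docker.
    func restartDocker() (journal []byte, err error) {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err = exec.CommandContext(ctx, "sudo", "systemctl", "restart", "docker").Run(); err != nil {
    		journal, _ = exec.Command("sudo", "journalctl", "--no-pager", "-u", "docker").Output()
    	}
    	return journal, err
    }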
	I0401 12:26:10.828199    6744 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 12:26:10.855326    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	I0401 12:26:10.855326    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[662]: time="2024-04-01T12:24:35.412256688Z" level=info msg="Starting up"
	I0401 12:26:10.855401    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[662]: time="2024-04-01T12:24:35.413442194Z" level=info msg="containerd not running, starting managed containerd"
	I0401 12:26:10.855401    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[662]: time="2024-04-01T12:24:35.414700801Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0401 12:26:10.855401    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.449417875Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 12:26:10.855469    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479431326Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 12:26:10.855469    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479582126Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 12:26:10.855525    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479673527Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 12:26:10.855525    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479694527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 12:26:10.855525    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479794527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:26:10.855581    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479842628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 12:26:10.855581    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480206129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:26:10.855651    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480315630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 12:26:10.855651    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480341830Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 12:26:10.855722    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480378630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 12:26:10.855785    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480560431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 12:26:10.855785    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.481088034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 12:26:10.855842    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484471951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:26:10.855881    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484612152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 12:26:10.855943    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484814453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:26:10.855985    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484910153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 12:26:10.856026    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.485195554Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 12:26:10.856064    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.485356555Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 12:26:10.856064    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.485401855Z" level=info msg="metadata content store policy set" policy=shared
	I0401 12:26:10.856113    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515297906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 12:26:10.856150    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515438206Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 12:26:10.856150    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515466606Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 12:26:10.856198    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515488807Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 12:26:10.856198    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515523307Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 12:26:10.856238    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515739808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 12:26:10.856271    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516277410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 12:26:10.856307    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516524012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 12:26:10.856307    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516684313Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 12:26:10.856355    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516732413Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 12:26:10.856355    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516768113Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856430    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516785613Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856430    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516801813Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856484    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516818413Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856484    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516836613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856546    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516859213Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856546    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516877513Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856602    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516892414Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 12:26:10.856664    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516916214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856664    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516933814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856714    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516949514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856714    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517005214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856751    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517026614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856751    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517042814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856831    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517058314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856831    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517073914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856831    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517089815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856921    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517108615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856970    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517123915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517139015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517153115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517172015Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517196015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517416516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517439216Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517739518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517786018Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517802518Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517816818Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517929619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517952319Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.518011319Z" level=info msg="NRI interface is disabled by configuration."
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519124625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519253925Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519345226Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519856428Z" level=info msg="containerd successfully booted in 0.072151s"
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.490349751Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.524263610Z" level=info msg="Loading containers: start."
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.797575615Z" level=info msg="Loading containers: done."
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.825802392Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.826064394Z" level=info msg="Daemon has completed initialization"
	I0401 12:26:10.856984    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.936118186Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 12:26:10.857536    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.936406288Z" level=info msg="API listen on [::]:2376"
	I0401 12:26:10.857608    6744 command_runner.go:130] > Apr 01 12:24:36 multinode-965600 systemd[1]: Started Docker Application Container Engine.
	I0401 12:26:10.857712    6744 command_runner.go:130] > Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.698986840Z" level=info msg="Processing signal 'terminated'"
	I0401 12:26:10.857752    6744 command_runner.go:130] > Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.700929543Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 12:26:10.857752    6744 command_runner.go:130] > Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.701829045Z" level=info msg="Daemon shutdown complete"
	I0401 12:26:10.857807    6744 command_runner.go:130] > Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.701902445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 12:26:10.857845    6744 command_runner.go:130] > Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.701932945Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 12:26:10.857845    6744 command_runner.go:130] > Apr 01 12:25:09 multinode-965600 systemd[1]: Stopping Docker Application Container Engine...
	I0401 12:26:10.857845    6744 command_runner.go:130] > Apr 01 12:25:10 multinode-965600 systemd[1]: docker.service: Deactivated successfully.
	I0401 12:26:10.857845    6744 command_runner.go:130] > Apr 01 12:25:10 multinode-965600 systemd[1]: Stopped Docker Application Container Engine.
	I0401 12:26:10.857845    6744 command_runner.go:130] > Apr 01 12:25:10 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	I0401 12:26:10.857845    6744 command_runner.go:130] > Apr 01 12:25:10 multinode-965600 dockerd[1017]: time="2024-04-01T12:25:10.787255864Z" level=info msg="Starting up"
	I0401 12:26:10.857845    6744 command_runner.go:130] > Apr 01 12:26:10 multinode-965600 dockerd[1017]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0401 12:26:10.858091    6744 command_runner.go:130] > Apr 01 12:26:10 multinode-965600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0401 12:26:10.858190    6744 command_runner.go:130] > Apr 01 12:26:10 multinode-965600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0401 12:26:10.858321    6744 command_runner.go:130] > Apr 01 12:26:10 multinode-965600 systemd[1]: Failed to start Docker Application Container Engine.
	I0401 12:26:10.870646    6744 out.go:177] 
	W0401 12:26:10.876091    6744 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 12:24:35 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 12:24:35 multinode-965600 dockerd[662]: time="2024-04-01T12:24:35.412256688Z" level=info msg="Starting up"
	Apr 01 12:24:35 multinode-965600 dockerd[662]: time="2024-04-01T12:24:35.413442194Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 12:24:35 multinode-965600 dockerd[662]: time="2024-04-01T12:24:35.414700801Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.449417875Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479431326Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479582126Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479673527Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479694527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479794527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.479842628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480206129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480315630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480341830Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480378630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.480560431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.481088034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484471951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484612152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484814453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.484910153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.485195554Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.485356555Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.485401855Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515297906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515438206Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515466606Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515488807Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515523307Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.515739808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516277410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516524012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516684313Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516732413Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516768113Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516785613Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516801813Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516818413Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516836613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516859213Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516877513Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516892414Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516916214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516933814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.516949514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517005214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517026614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517042814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517058314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517073914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517089815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517108615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517123915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517139015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517153115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517172015Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517196015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517416516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517439216Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517739518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517786018Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517802518Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517816818Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517929619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.517952319Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.518011319Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519124625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519253925Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519345226Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 12:24:35 multinode-965600 dockerd[668]: time="2024-04-01T12:24:35.519856428Z" level=info msg="containerd successfully booted in 0.072151s"
	Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.490349751Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.524263610Z" level=info msg="Loading containers: start."
	Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.797575615Z" level=info msg="Loading containers: done."
	Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.825802392Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.826064394Z" level=info msg="Daemon has completed initialization"
	Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.936118186Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 12:24:36 multinode-965600 dockerd[662]: time="2024-04-01T12:24:36.936406288Z" level=info msg="API listen on [::]:2376"
	Apr 01 12:24:36 multinode-965600 systemd[1]: Started Docker Application Container Engine.
	Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.698986840Z" level=info msg="Processing signal 'terminated'"
	Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.700929543Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.701829045Z" level=info msg="Daemon shutdown complete"
	Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.701902445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 12:25:09 multinode-965600 dockerd[662]: time="2024-04-01T12:25:09.701932945Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 12:25:09 multinode-965600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 12:25:10 multinode-965600 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 12:25:10 multinode-965600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 12:25:10 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 12:25:10 multinode-965600 dockerd[1017]: time="2024-04-01T12:25:10.787255864Z" level=info msg="Starting up"
	Apr 01 12:26:10 multinode-965600 dockerd[1017]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 12:26:10 multinode-965600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 12:26:10 multinode-965600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 12:26:10 multinode-965600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0401 12:26:10.876326    6744 out.go:239] * 
	W0401 12:26:10.877725    6744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 12:26:10.880505    6744 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
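
The RUNTIME_ENABLE failure recorded above comes from the second dockerd instance (pid 1017) timing out while dialing "/run/containerd/containerd.sock", even though the first instance (pid 662) had been serving its managed containerd on /var/run/docker/containerd/containerd.sock; the mismatch suggests the restarted daemon was waiting on a system containerd socket that never came up. A minimal diagnostic sketch, assuming shell access to the node via `minikube ssh` (illustrative commands, not captured in this run):

	# open a shell inside the multinode-965600 VM
	minikube ssh -p multinode-965600
	# inspect the failed unit and its journal, as the error message itself suggests
	sudo systemctl status docker.service
	sudo journalctl -xeu docker.service
	# check which containerd socket, if either, actually exists
	ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock
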
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.6613195s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:26:11.316348    2620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:26:23.767860    2620 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (234.39s)
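
The post-mortem warning ("Your kubectl is pointing to stale minikube-vm") is a side effect of the failed start rather than a separate fault: the kubeconfig still points at an older VM. A hedged sketch of repointing the context once a profile is actually running, using the command the warning itself names (illustrative only):

	minikube update-context -p multinode-965600
	kubectl config current-context
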

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (108.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (422.0607ms)

                                                
                                                
** stderr ** 
	W0401 12:26:23.962951    9064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: cluster "multinode-965600" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- rollout status deployment/busybox: exit status 1 (394.5982ms)

                                                
                                                
** stderr ** 
	W0401 12:26:24.383826   14208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (405.6843ms)

                                                
                                                
** stderr ** 
	W0401 12:26:24.779787    4436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (382.3258ms)

                                                
                                                
** stderr ** 
	W0401 12:26:26.520875    5632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (375.1491ms)

                                                
                                                
** stderr ** 
	W0401 12:26:28.845542    1704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (367.9168ms)

                                                
                                                
** stderr ** 
	W0401 12:26:30.669849    9928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (382.3489ms)

                                                
                                                
** stderr ** 
	W0401 12:26:35.269287    8484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (389.5728ms)

                                                
                                                
** stderr ** 
	W0401 12:26:38.318496    3936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (388.165ms)

                                                
                                                
** stderr ** 
	W0401 12:26:45.023548    8668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (375.0611ms)

                                                
                                                
** stderr ** 
	W0401 12:26:54.269762    7084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (390.9096ms)

                                                
                                                
** stderr ** 
	W0401 12:27:12.932938   13436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (373.0351ms)

                                                
                                                
** stderr ** 
	W0401 12:27:30.792283   13912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (387.1592ms)

                                                
                                                
** stderr ** 
	W0401 12:27:57.609380    9420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (378.3348ms)

                                                
                                                
** stderr ** 
	W0401 12:27:57.982655    9704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- exec  -- nslookup kubernetes.io: exit status 1 (389.1022ms)

** stderr ** 
	W0401 12:27:58.368678    6596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- exec  -- nslookup kubernetes.default: exit status 1 (393.746ms)

** stderr ** 
	W0401 12:27:58.759551    6168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (375.4666ms)

** stderr ** 
	W0401 12:27:59.149657    2168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.7549172s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:27:59.523762    8516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:28:12.089364    8516 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (108.32s)
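For reference: the hashed directory in the Docker CLI warning repeated throughout these logs is deterministic, not random. The Docker CLI keys each context's metadata directory by the SHA-256 digest of the context name, and 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f is the digest of the string "default". A minimal Go sketch (illustration only, not part of the test suite):

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

func main() {
	// Docker stores context metadata under
	// <docker config dir>\contexts\meta\<sha256(context name)>\meta.json.
	digest := fmt.Sprintf("%x", sha256.Sum256([]byte("default")))
	fmt.Println(digest) // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
	fmt.Println(filepath.Join(`C:\Users\jenkins.minikube6\.docker\contexts\meta`, digest, "meta.json"))
}

The warning itself is harmless here; the fatal part of each command is the missing cluster entry in the kubeconfig, covered below.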

TestMultiNode/serial/PingHostFrom2Pods (12.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-965600 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (378.0169ms)

** stderr ** 
	W0401 12:28:12.274010    9800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-965600"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
E0401 12:28:23.479094    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.5243523s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:28:12.657406    5576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:28:24.994092    5576 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (12.90s)
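Every failing command above dies the same way: status.go:417 reports that "multinode-965600" does not appear in the kubeconfig, and kubectl correspondingly finds no server for that cluster. A hedged sketch of what that check amounts to, using k8s.io/client-go's clientcmd loader (the path is the one from the log; this is not minikube's actual code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Println("load:", err)
		return
	}
	// The profile name must be present as a cluster entry;
	// `minikube update-context`, suggested in the stdout above, rewrites it.
	if _, ok := cfg.Clusters["multinode-965600"]; !ok {
		fmt.Println(`"multinode-965600" does not appear in`, kubeconfig)
	}
}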

TestMultiNode/serial/AddNode (20.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-965600 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-965600 -v 3 --alsologtostderr: exit status 103 (7.5032637s)

-- stdout --
	* The control-plane node multinode-965600 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-965600"

-- /stdout --
** stderr ** 
	W0401 12:28:25.182559    6808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:28:25.266065    6808 out.go:291] Setting OutFile to fd 940 ...
	I0401 12:28:25.266898    6808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:28:25.266898    6808 out.go:304] Setting ErrFile to fd 788...
	I0401 12:28:25.266971    6808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:28:25.282724    6808 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:28:25.283851    6808 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:28:25.285276    6808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:28:27.535154    6808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:28:27.535553    6808 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:28:27.535719    6808 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:28:27.535919    6808 api_server.go:166] Checking apiserver status ...
	I0401 12:28:27.549804    6808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 12:28:27.549804    6808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:28:29.775218    6808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:28:29.776201    6808 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:28:29.776201    6808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:28:32.415509    6808 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:28:32.416112    6808 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:28:32.416181    6808 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:28:32.529003    6808 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.979164s)
	W0401 12:28:32.529058    6808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:28:32.532508    6808 out.go:177] * The control-plane node multinode-965600 apiserver is not running: (state=Stopped)
	I0401 12:28:32.535670    6808 out.go:177]   To start a cluster, run: "minikube start -p multinode-965600"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-965600 -v 3 --alsologtostderr" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.5900081s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:28:32.698686    9256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:28:45.087156    9256 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/AddNode (20.09s)
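The trace above shows where "state=Stopped" comes from: minikube greps the guest for a kube-apiserver process and treats pgrep exiting 1 (no match) as a stopped apiserver. A local-only Go sketch of that interpretation (the pgrep pattern is verbatim from the log; run outside a minikube guest this will normally print "Stopped"):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 when a process matches and 1 when none does.
	err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("apiserver: Running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("apiserver: Stopped (no kube-apiserver process)")
	default:
		fmt.Println("probe failed:", err)
	}
}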

TestMultiNode/serial/MultiNodeLabels (12.66s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-965600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-965600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (149.0335ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-965600

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-965600 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-965600 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.5007007s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:28:45.439217   12580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:28:57.748317   12580 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (12.66s)
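The second error in this test ("unexpected end of JSON input") is a direct consequence of the first: kubectl exited 1 without printing anything, and unmarshalling empty input yields exactly that message from encoding/json. Minimal reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels map[string]string
	err := json.Unmarshal([]byte(""), &labels) // what the test received: nothing
	fmt.Println(err)                           // unexpected end of JSON input
}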

TestMultiNode/serial/ProfileList (24.87s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.2687919s)
multinode_test.go:166: expected profile "multinode-965600" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-401500\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-401500\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-401500\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.19.159.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.19.153.73\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.19.149.50\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.19.145.208\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"172.19.144.10\",\"Port\":0,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":fa
lse,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube6:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"Mo
untIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false},{\"Name\":\"multinode-965600\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-965600\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"Ins
ecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-965600\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepositor
y\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.19.151.177\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube6:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\
":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.6032046s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:29:10.214906    7580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:29:22.626880    7580 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ProfileList (24.87s)
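The assertion at multinode_test.go:166 decodes `profile list --output json` and counts Config.Nodes for the profile; the blob above shows multinode-965600 carrying a single node entry where three were expected. A trimmed sketch of structs sufficient for that count (illustrative types mirroring the JSON keys above, not minikube's own):

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name string `json:"Name"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Reduced version of the payload captured above.
	data := []byte(`{"valid":[{"Name":"multinode-965600","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(data, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // multinode-965600: 1 node(s)
	}
}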

TestMultiNode/serial/CopyFile (25.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status --output json --alsologtostderr: exit status 6 (12.5966006s)

-- stdout --
	{"Name":"multinode-965600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	W0401 12:29:22.814315    1320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:29:22.893289    1320 out.go:291] Setting OutFile to fd 888 ...
	I0401 12:29:22.893839    1320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:29:22.893839    1320 out.go:304] Setting ErrFile to fd 992...
	I0401 12:29:22.893839    1320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:29:22.911200    1320 out.go:298] Setting JSON to true
	I0401 12:29:22.911270    1320 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:29:22.911459    1320 notify.go:220] Checking for updates...
	I0401 12:29:22.912232    1320 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:29:22.912305    1320 status.go:255] checking status of multinode-965600 ...
	I0401 12:29:22.913429    1320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:29:25.175718    1320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:29:25.175718    1320 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:29:25.175718    1320 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:29:25.175718    1320 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:29:25.176602    1320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:29:27.467267    1320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:29:27.467267    1320 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:29:27.467267    1320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:29:30.142904    1320 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:29:30.142904    1320 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:29:30.142904    1320 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:29:30.159342    1320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:29:30.159342    1320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:29:32.367645    1320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:29:32.367645    1320 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:29:32.368135    1320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:29:35.052517    1320 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:29:35.052517    1320 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:29:35.054050    1320 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:29:35.157787    1320 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9983479s)
	I0401 12:29:35.170016    1320 ssh_runner.go:195] Run: systemctl --version
	I0401 12:29:35.192252    1320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0401 12:29:35.219486    1320 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:29:35.219562    1320 api_server.go:166] Checking apiserver status ...
	I0401 12:29:35.231767    1320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0401 12:29:35.259644    1320 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:29:35.259644    1320 status.go:422] multinode-965600 apiserver status = Stopped (err=<nil>)
	I0401 12:29:35.259644    1320 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-965600 status --output json --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
E0401 12:29:46.737424    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.6664356s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:29:35.408208   12912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:29:47.890924   12912 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/CopyFile (25.26s)
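CopyFile never reaches a copy: it aborts on the status call whose single-line JSON appears in the stdout above, with fields matching the struct minikube prints at status.go:257. A small sketch that decodes that exact line; the health gate at the end is illustrative, not the test's own logic:

package main

import (
	"encoding/json"
	"fmt"
)

// Field names as emitted by `minikube status --output json` in the log.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	line := []byte(`{"Name":"multinode-965600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}`)
	var st nodeStatus
	if err := json.Unmarshal(line, &st); err != nil {
		panic(err)
	}
	healthy := st.Host == "Running" && st.Kubelet == "Running" &&
		st.APIServer == "Running" && st.Kubeconfig == "Configured"
	fmt.Println("healthy:", healthy) // false
}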

TestMultiNode/serial/StopNode (25.63s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 node stop m03: exit status 85 (293.3431ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0401 12:29:48.080629   10796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_b919dc3b020968087ec77f25afbb061db3e8211c_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-965600 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status: exit status 6 (12.5764937s)

-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:29:48.368225    3428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:30:00.764233    3428 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
multinode_test.go:257: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-965600 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.7612613s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:30:00.960471    4244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:30:13.524804    4244 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StopNode (25.63s)
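StopNode exits 85 (GUEST_NODE_RETRIEVE) before touching any VM: as the ProfileList JSON showed, the saved profile holds only the primary control-plane node (Name ""), so looking up "m03" can only fail. The retrieval is essentially a name scan over the profile's node list; a hedged sketch with a hypothetical findNode helper:

package main

import "fmt"

type node struct{ Name string }

// findNode scans the profile's node list for an exact name match,
// the shape of lookup that GUEST_NODE_RETRIEVE reports failing.
func findNode(nodes []node, name string) (node, bool) {
	for _, n := range nodes {
		if n.Name == name {
			return n, true
		}
	}
	return node{}, false
}

func main() {
	nodes := []node{{Name: ""}} // what the multinode-965600 profile actually contains
	if _, ok := findNode(nodes, "m03"); !ok {
		fmt.Println("Could not find node m03") // surfaced above as exit status 85
	}
}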

TestMultiNode/serial/StartAfterStop (83.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 node start m03 -v=7 --alsologtostderr: exit status 85 (286.3176ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0401 12:30:13.711665    1664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:30:13.791472    1664 out.go:291] Setting OutFile to fd 708 ...
	I0401 12:30:13.809223    1664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:13.809223    1664 out.go:304] Setting ErrFile to fd 596...
	I0401 12:30:13.809223    1664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:13.824051    1664 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:30:13.825236    1664 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:30:13.831290    1664 out.go:177] 
	W0401 12:30:13.833393    1664 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0401 12:30:13.833919    1664 out.go:239] * 
	* 
	W0401 12:30:13.847884    1664 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 12:30:13.850400    1664 out.go:177] 

** /stderr **
multinode_test.go:284: W0401 12:30:13.711665    1664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0401 12:30:13.791472    1664 out.go:291] Setting OutFile to fd 708 ...
I0401 12:30:13.809223    1664 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 12:30:13.809223    1664 out.go:304] Setting ErrFile to fd 596...
I0401 12:30:13.809223    1664 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 12:30:13.824051    1664 mustload.go:65] Loading cluster: multinode-965600
I0401 12:30:13.825236    1664 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0401 12:30:13.831290    1664 out.go:177] 
W0401 12:30:13.833393    1664 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0401 12:30:13.833919    1664 out.go:239] * 
* 
W0401 12:30:13.847884    1664 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_0.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_0.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0401 12:30:13.850400    1664 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-965600 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr: exit status 6 (12.5289943s)

-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:30:14.006948    4872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:30:14.089615    4872 out.go:291] Setting OutFile to fd 908 ...
	I0401 12:30:14.090347    4872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:14.090347    4872 out.go:304] Setting ErrFile to fd 864...
	I0401 12:30:14.090347    4872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:14.108769    4872 out.go:298] Setting JSON to false
	I0401 12:30:14.108912    4872 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:30:14.108912    4872 notify.go:220] Checking for updates...
	I0401 12:30:14.109900    4872 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:30:14.109900    4872 status.go:255] checking status of multinode-965600 ...
	I0401 12:30:14.110830    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:16.379218    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:16.379218    4872 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:16.380182    4872 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:30:16.380223    4872 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:30:16.380892    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:18.642943    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:18.642943    4872 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:18.642943    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:30:21.273545    4872 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:30:21.273728    4872 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:21.273728    4872 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:30:21.288412    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:30:21.288412    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:23.511408    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:23.511581    4872 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:23.511658    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:30:26.161463    4872 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:30:26.161463    4872 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:26.161463    4872 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:30:26.265182    4872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9766384s)
	I0401 12:30:26.277960    4872 ssh_runner.go:195] Run: systemctl --version
	I0401 12:30:26.301094    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0401 12:30:26.330077    4872 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:30:26.330077    4872 api_server.go:166] Checking apiserver status ...
	I0401 12:30:26.341077    4872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0401 12:30:26.367522    4872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:30:26.367586    4872 status.go:422] multinode-965600 apiserver status = Stopped (err=<nil>)
	I0401 12:30:26.367586    4872 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
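Each ~12.5s status call in this run spends nearly all of its time in the PowerShell round-trips logged by libmachine above: one Get-VM per state check, plus another expression to read the first IP of the first network adapter. A Windows-only sketch of that invocation pattern (the run helper is mine; the two expressions are verbatim from the trace):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const powershell = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

// run launches one non-interactive PowerShell process per expression,
// which is why each probe costs seconds.
func run(expr string) (string, error) {
	out, err := exec.Command(powershell, "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := run(`( Hyper-V\Get-VM multinode-965600 ).state`)
	if err != nil {
		fmt.Println("Get-VM:", err)
		return
	}
	ip, _ := run(`(( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]`)
	fmt.Printf("state=%s ip=%s\n", state, ip) // e.g. state=Running ip=172.19.151.177
}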
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr: exit status 6 (12.6397142s)

-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:30:27.638252    9912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:30:27.718282    9912 out.go:291] Setting OutFile to fd 988 ...
	I0401 12:30:27.719314    9912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:27.719314    9912 out.go:304] Setting ErrFile to fd 940...
	I0401 12:30:27.719314    9912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:27.732993    9912 out.go:298] Setting JSON to false
	I0401 12:30:27.732993    9912 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:30:27.732993    9912 notify.go:220] Checking for updates...
	I0401 12:30:27.733879    9912 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:30:27.733879    9912 status.go:255] checking status of multinode-965600 ...
	I0401 12:30:27.734628    9912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:29.992090    9912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:29.993004    9912 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:29.993004    9912 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:30:29.993116    9912 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:30:29.993948    9912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:32.238885    9912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:32.238885    9912 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:32.238885    9912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:30:34.873627    9912 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:30:34.873800    9912 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:34.873800    9912 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:30:34.886729    9912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:30:34.886729    9912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:37.158667    9912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:37.158667    9912 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:37.158988    9912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:30:39.910984    9912 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:30:39.910984    9912 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:39.912098    9912 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:30:40.016913    9912 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1300927s)
	I0401 12:30:40.034058    9912 ssh_runner.go:195] Run: systemctl --version
	I0401 12:30:40.061235    9912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0401 12:30:40.090054    9912 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:30:40.090054    9912 api_server.go:166] Checking apiserver status ...
	I0401 12:30:40.106473    9912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0401 12:30:40.131976    9912 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:30:40.131976    9912 status.go:422] multinode-965600 apiserver status = Stopped (err=<nil>)
	I0401 12:30:40.131976    9912 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr: exit status 6 (12.57678s)

-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:30:41.686409    1952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:30:41.773328    1952 out.go:291] Setting OutFile to fd 596 ...
	I0401 12:30:41.774302    1952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:41.774302    1952 out.go:304] Setting ErrFile to fd 920...
	I0401 12:30:41.774302    1952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:41.792509    1952 out.go:298] Setting JSON to false
	I0401 12:30:41.792509    1952 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:30:41.792509    1952 notify.go:220] Checking for updates...
	I0401 12:30:41.792509    1952 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:30:41.792509    1952 status.go:255] checking status of multinode-965600 ...
	I0401 12:30:41.793724    1952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:44.065767    1952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:44.065855    1952 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:44.065939    1952 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:30:44.065939    1952 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:30:44.066536    1952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:46.326896    1952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:46.327711    1952 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:46.327800    1952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:30:48.995866    1952 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:30:48.995866    1952 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:48.996733    1952 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:30:49.009444    1952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:30:49.009444    1952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:51.246828    1952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:51.246828    1952 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:51.246828    1952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:30:53.932878    1952 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:30:53.933131    1952 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:53.933358    1952 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:30:54.023280    1952 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0132661s)
	I0401 12:30:54.036671    1952 ssh_runner.go:195] Run: systemctl --version
	I0401 12:30:54.058962    1952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0401 12:30:54.084376    1952 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:30:54.084451    1952 api_server.go:166] Checking apiserver status ...
	I0401 12:30:54.096408    1952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0401 12:30:54.120536    1952 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:30:54.120536    1952 status.go:422] multinode-965600 apiserver status = Stopped (err=<nil>)
	I0401 12:30:54.120536    1952 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr: exit status 6 (12.5344806s)

-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:30:56.872125    8904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:30:56.951379    8904 out.go:291] Setting OutFile to fd 728 ...
	I0401 12:30:56.952643    8904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:56.952859    8904 out.go:304] Setting ErrFile to fd 708...
	I0401 12:30:56.952859    8904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:30:56.967622    8904 out.go:298] Setting JSON to false
	I0401 12:30:56.967622    8904 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:30:56.967622    8904 notify.go:220] Checking for updates...
	I0401 12:30:56.968681    8904 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:30:56.969213    8904 status.go:255] checking status of multinode-965600 ...
	I0401 12:30:56.969356    8904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:30:59.220255    8904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:30:59.220896    8904 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:30:59.220896    8904 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:30:59.220896    8904 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:30:59.221213    8904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:31:01.485441    8904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:31:01.485441    8904 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:01.486382    8904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:31:04.192582    8904 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:31:04.192582    8904 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:04.192988    8904 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:31:04.206043    8904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:31:04.207093    8904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:31:06.426890    8904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:31:06.426890    8904 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:06.427777    8904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:31:09.061381    8904 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:31:09.061381    8904 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:09.061669    8904 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:31:09.159358    8904 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9522296s)
	I0401 12:31:09.172792    8904 ssh_runner.go:195] Run: systemctl --version
	I0401 12:31:09.195125    8904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0401 12:31:09.219788    8904 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:31:09.219861    8904 api_server.go:166] Checking apiserver status ...
	I0401 12:31:09.232595    8904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0401 12:31:09.257076    8904 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:31:09.257076    8904 status.go:422] multinode-965600 apiserver status = Stopped (err=<nil>)
	I0401 12:31:09.257076    8904 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr: exit status 6 (12.545935s)

-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:31:11.779965   12252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:31:11.861063   12252 out.go:291] Setting OutFile to fd 764 ...
	I0401 12:31:11.862121   12252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:31:11.862121   12252 out.go:304] Setting ErrFile to fd 900...
	I0401 12:31:11.862202   12252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:31:11.877419   12252 out.go:298] Setting JSON to false
	I0401 12:31:11.877419   12252 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:31:11.877419   12252 notify.go:220] Checking for updates...
	I0401 12:31:11.877818   12252 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:31:11.877818   12252 status.go:255] checking status of multinode-965600 ...
	I0401 12:31:11.879128   12252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:31:14.138953   12252 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:31:14.139036   12252 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:14.139036   12252 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:31:14.139115   12252 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:31:14.139825   12252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:31:16.383661   12252 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:31:16.383661   12252 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:16.384040   12252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:31:19.045942   12252 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:31:19.045971   12252 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:19.046036   12252 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:31:19.060383   12252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:31:19.060383   12252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:31:21.302290   12252 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:31:21.302325   12252 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:21.302663   12252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:31:23.973241   12252 main.go:141] libmachine: [stdout =====>] : 172.19.151.177
	
	I0401 12:31:23.973241   12252 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:31:23.974554   12252 sshutil.go:53] new ssh client: &{IP:172.19.151.177 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:31:24.068781   12252 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0082553s)
	I0401 12:31:24.083572   12252 ssh_runner.go:195] Run: systemctl --version
	I0401 12:31:24.105251   12252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0401 12:31:24.131679   12252 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:31:24.131679   12252 api_server.go:166] Checking apiserver status ...
	I0401 12:31:24.147709   12252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0401 12:31:24.173570   12252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:31:24.173570   12252 status.go:422] multinode-965600 apiserver status = Stopped (err=<nil>)
	I0401 12:31:24.173570   12252 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-965600 status -v=7 --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.4755388s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 12:31:24.319402   14212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:31:36.617524   14212 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StartAfterStop (83.09s)
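The failure mode is identical across every retry above: the VM host is Running, but the "multinode-965600" entry is missing from the kubeconfig, so each status call exits 6 with Kubeconfig: Misconfigured. A plausible manual recovery, assuming the profile itself is intact, is the one the warning in the output already points at:

	# Confirm the context is really absent from the kubeconfig:
	kubectl config get-contexts

	# Regenerate the kubeconfig entry from the live profile, per the printed hint:
	out/minikube-windows-amd64.exe -p multinode-965600 update-context

	# Re-check; "Kubeconfig: Configured" and exit code 0 indicate recovery:
	out/minikube-windows-amd64.exe -p multinode-965600 status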

TestMultiNode/serial/RestartKeepsNodes (240.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-965600
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-965600
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-965600: (44.1952311s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true -v=8 --alsologtostderr
E0401 12:33:23.477373    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true -v=8 --alsologtostderr: exit status 90 (3m2.6580272s)

-- stdout --
	* [multinode-965600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-965600" primary control-plane node in "multinode-965600" cluster
	* Restarting existing hyperv VM for "multinode-965600" ...
	
	

-- /stdout --
** stderr ** 
	W0401 12:32:21.267716   13936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:32:21.338715   13936 out.go:291] Setting OutFile to fd 824 ...
	I0401 12:32:21.338715   13936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:32:21.338715   13936 out.go:304] Setting ErrFile to fd 840...
	I0401 12:32:21.338715   13936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:32:21.360712   13936 out.go:298] Setting JSON to false
	I0401 12:32:21.363701   13936 start.go:129] hostinfo: {"hostname":"minikube6","uptime":317499,"bootTime":1711657241,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 12:32:21.363701   13936 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 12:32:21.367701   13936 out.go:177] * [multinode-965600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 12:32:21.370712   13936 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:32:21.370712   13936 notify.go:220] Checking for updates...
	I0401 12:32:21.373706   13936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 12:32:21.376707   13936 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 12:32:21.378708   13936 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 12:32:21.380763   13936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 12:32:21.383713   13936 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:32:21.383713   13936 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 12:32:27.041824   13936 out.go:177] * Using the hyperv driver based on existing profile
	I0401 12:32:27.044697   13936 start.go:297] selected driver: hyperv
	I0401 12:32:27.044794   13936 start.go:901] validating driver "hyperv" against &{Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.151.177 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:32:27.045019   13936 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 12:32:27.097451   13936 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 12:32:27.097451   13936 cni.go:84] Creating CNI manager for ""
	I0401 12:32:27.097451   13936 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 12:32:27.097451   13936 start.go:340] cluster config:
	{Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.151.177 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:32:27.098104   13936 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 12:32:27.104703   13936 out.go:177] * Starting "multinode-965600" primary control-plane node in "multinode-965600" cluster
	I0401 12:32:27.107142   13936 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 12:32:27.107142   13936 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 12:32:27.107142   13936 cache.go:56] Caching tarball of preloaded images
	I0401 12:32:27.107142   13936 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 12:32:27.107142   13936 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 12:32:27.107142   13936 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\config.json ...
	I0401 12:32:27.109631   13936 start.go:360] acquireMachinesLock for multinode-965600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 12:32:27.109631   13936 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-965600"
	I0401 12:32:27.109631   13936 start.go:96] Skipping create...Using existing machine configuration
	I0401 12:32:27.109631   13936 fix.go:54] fixHost starting: 
	I0401 12:32:27.110647   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:32:29.948353   13936 main.go:141] libmachine: [stdout =====>] : Off
	
	I0401 12:32:29.948353   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:29.948353   13936 fix.go:112] recreateIfNeeded on multinode-965600: state=Stopped err=<nil>
	W0401 12:32:29.948913   13936 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 12:32:29.953917   13936 out.go:177] * Restarting existing hyperv VM for "multinode-965600" ...
	I0401 12:32:29.959269   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-965600
	I0401 12:32:33.177909   13936 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:32:33.177909   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:33.177909   13936 main.go:141] libmachine: Waiting for host to start...
	I0401 12:32:33.177909   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:32:35.552775   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:32:35.552775   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:35.553772   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:32:38.209526   13936 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:32:38.210096   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:39.214205   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:32:41.531565   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:32:41.531565   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:41.531749   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:32:44.185667   13936 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:32:44.185729   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:45.186774   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:32:47.517499   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:32:47.517741   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:47.517783   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:32:50.175140   13936 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:32:50.176190   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:51.187484   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:32:53.550153   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:32:53.550153   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:53.550153   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:32:56.196950   13936 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:32:56.196950   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:57.200062   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:32:59.573039   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:32:59.573039   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:32:59.573524   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:02.302211   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:02.302211   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:02.304718   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:04.509238   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:04.509734   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:04.509734   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:07.170534   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:07.170534   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:07.171747   13936 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\config.json ...
	I0401 12:33:07.175545   13936 machine.go:94] provisionDockerMachine start ...
	I0401 12:33:07.175666   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:09.446634   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:09.446634   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:09.447192   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:12.178363   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:12.178363   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:12.187601   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:33:12.188534   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:33:12.188579   13936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 12:33:12.317670   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 12:33:12.317847   13936 buildroot.go:166] provisioning hostname "multinode-965600"
	I0401 12:33:12.317964   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:14.583155   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:14.583933   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:14.583933   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:17.286367   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:17.286367   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:17.292791   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:33:17.293170   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:33:17.293170   13936 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-965600 && echo "multinode-965600" | sudo tee /etc/hostname
	I0401 12:33:17.454449   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965600
	
	I0401 12:33:17.454603   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:19.692147   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:19.693338   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:19.693431   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:22.326071   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:22.326071   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:22.333227   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:33:22.333227   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:33:22.333227   13936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-965600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-965600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-965600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 12:33:22.483699   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 12:33:22.483806   13936 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 12:33:22.483806   13936 buildroot.go:174] setting up certificates
	I0401 12:33:22.483806   13936 provision.go:84] configureAuth start
	I0401 12:33:22.483927   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:24.690284   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:24.690801   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:24.690801   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:27.355913   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:27.355913   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:27.356885   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:29.605421   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:29.605421   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:29.605421   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:32.234061   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:32.234061   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:32.235049   13936 provision.go:143] copyHostCerts
	I0401 12:33:32.235366   13936 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 12:33:32.235366   13936 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 12:33:32.235366   13936 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 12:33:32.236093   13936 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 12:33:32.237386   13936 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 12:33:32.237386   13936 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 12:33:32.237386   13936 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 12:33:32.238062   13936 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 12:33:32.238910   13936 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 12:33:32.238910   13936 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 12:33:32.238910   13936 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 12:33:32.239591   13936 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 12:33:32.240343   13936 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-965600 san=[127.0.0.1 172.19.156.14 localhost minikube multinode-965600]
	I0401 12:33:32.478874   13936 provision.go:177] copyRemoteCerts
	I0401 12:33:32.491840   13936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 12:33:32.492757   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:34.727174   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:34.727174   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:34.727334   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:37.468679   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:37.468679   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:37.469855   13936 sshutil.go:53] new ssh client: &{IP:172.19.156.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:33:37.581762   13936 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0896094s)
	I0401 12:33:37.581854   13936 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 12:33:37.582248   13936 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 12:33:37.635947   13936 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 12:33:37.636078   13936 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0401 12:33:37.687607   13936 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 12:33:37.688132   13936 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 12:33:37.740862   13936 provision.go:87] duration metric: took 15.2569484s to configureAuth
	I0401 12:33:37.740862   13936 buildroot.go:189] setting minikube options for container-runtime
	I0401 12:33:37.742004   13936 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:33:37.742067   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:39.968661   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:39.969651   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:39.969651   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:42.677344   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:42.678057   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:42.684146   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:33:42.684146   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:33:42.684146   13936 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 12:33:42.814769   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 12:33:42.814840   13936 buildroot.go:70] root file system type: tmpfs
	I0401 12:33:42.814840   13936 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 12:33:42.814840   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:45.064845   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:45.064845   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:45.065135   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:47.771310   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:47.771369   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:47.775999   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:33:47.776696   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:33:47.776696   13936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 12:33:47.935230   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 12:33:47.935343   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:50.166102   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:50.166102   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:50.166309   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:33:52.819850   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:33:52.820278   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:52.826912   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:33:52.826912   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:33:52.827494   13936 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 12:33:55.043242   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
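	The diff command above is an install-if-changed guard: the freshly rendered docker.service.new is only promoted, and the daemon reloaded, enabled, and restarted, when diff exits non-zero, either because the two files differ or, as in this run, because no docker.service existed yet (hence the "can't stat" message and the new symlink). The same guard written long-form (sketch):
	
	  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	  fi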
	
	I0401 12:33:55.043361   13936 machine.go:97] duration metric: took 47.8674807s to provisionDockerMachine
	I0401 12:33:55.043448   13936 start.go:293] postStartSetup for "multinode-965600" (driver="hyperv")
	I0401 12:33:55.043448   13936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 12:33:55.056597   13936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 12:33:55.057185   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:33:57.298566   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:33:57.298909   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:33:57.298993   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:34:00.030532   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:34:00.031757   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:00.032137   13936 sshutil.go:53] new ssh client: &{IP:172.19.156.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:34:00.141768   13936 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0845475s)
	I0401 12:34:00.154363   13936 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 12:34:00.161416   13936 command_runner.go:130] > NAME=Buildroot
	I0401 12:34:00.161416   13936 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 12:34:00.161416   13936 command_runner.go:130] > ID=buildroot
	I0401 12:34:00.161416   13936 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 12:34:00.161416   13936 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 12:34:00.161416   13936 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 12:34:00.161553   13936 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 12:34:00.161654   13936 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 12:34:00.162965   13936 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 12:34:00.163025   13936 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 12:34:00.175885   13936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 12:34:00.196074   13936 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 12:34:00.244196   13936 start.go:296] duration metric: took 5.2007118s for postStartSetup
	I0401 12:34:00.244196   13936 fix.go:56] duration metric: took 1m33.1339132s for fixHost
	I0401 12:34:00.244196   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:34:02.475926   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:34:02.475926   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:02.476020   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:34:05.182603   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:34:05.182603   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:05.189894   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:34:05.190561   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:34:05.190561   13936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 12:34:05.326699   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711974845.326884273
	
	I0401 12:34:05.326759   13936 fix.go:216] guest clock: 1711974845.326884273
	I0401 12:34:05.326759   13936 fix.go:229] Guest: 2024-04-01 12:34:05.326884273 +0000 UTC Remote: 2024-04-01 12:34:00.2441966 +0000 UTC m=+99.094583001 (delta=5.082687673s)
	I0401 12:34:05.326880   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:34:07.525372   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:34:07.525372   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:07.525569   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:34:10.259604   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:34:10.259604   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:10.265915   13936 main.go:141] libmachine: Using SSH client type: native
	I0401 12:34:10.266592   13936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.156.14 22 <nil> <nil>}
	I0401 12:34:10.266592   13936 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711974845
	I0401 12:34:10.414157   13936 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 12:34:05 UTC 2024
	
	I0401 12:34:10.414157   13936 fix.go:236] clock set: Mon Apr  1 12:34:05 UTC 2024
	 (err=<nil>)
	I0401 12:34:10.414157   13936 start.go:83] releasing machines lock for "multinode-965600", held for 1m43.3038031s
	I0401 12:34:10.414806   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:34:12.645739   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:34:12.646050   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:12.646050   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:34:15.332894   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:34:15.333391   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:15.337900   13936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 12:34:15.338034   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:34:15.349618   13936 ssh_runner.go:195] Run: cat /version.json
	I0401 12:34:15.349618   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:34:17.611437   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:34:17.611437   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:17.611437   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:34:17.624042   13936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:34:17.624042   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:17.624042   13936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:34:20.409116   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:34:20.409759   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:20.409848   13936 sshutil.go:53] new ssh client: &{IP:172.19.156.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:34:20.437463   13936 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:34:20.437463   13936 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:34:20.437693   13936 sshutil.go:53] new ssh client: &{IP:172.19.156.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:34:20.505959   13936 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 12:34:20.505959   13936 ssh_runner.go:235] Completed: cat /version.json: (5.1563048s)
	I0401 12:34:20.518325   13936 ssh_runner.go:195] Run: systemctl --version
	I0401 12:34:20.613056   13936 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 12:34:20.613280   13936 command_runner.go:130] > systemd 252 (252)
	I0401 12:34:20.613280   13936 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2753435s)
	I0401 12:34:20.613280   13936 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 12:34:20.626566   13936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 12:34:20.636697   13936 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 12:34:20.637469   13936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 12:34:20.650491   13936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 12:34:20.683123   13936 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0401 12:34:20.683123   13936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
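	Conflicting bridge and podman CNI definitions are disabled by renaming rather than deleting: the .mk_disabled suffix keeps them restorable if the runtime choice changes later. The find/-exec above is roughly equivalent to (sketch):
	
	  for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	    case "$f" in *.mk_disabled) continue ;; esac
	    [ -e "$f" ] && sudo mv "$f" "${f}.mk_disabled"
	  done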
	I0401 12:34:20.683236   13936 start.go:494] detecting cgroup driver to use...
	I0401 12:34:20.683518   13936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:34:20.722709   13936 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 12:34:20.736928   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 12:34:20.773104   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 12:34:20.797747   13936 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 12:34:20.810754   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 12:34:20.848249   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:34:20.883768   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 12:34:20.917599   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:34:20.951008   13936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 12:34:20.983926   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 12:34:21.018879   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 12:34:21.051572   13936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
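	Taken together, the sed edits above pin the pause image, force the cgroupfs driver (SystemdCgroup = false), migrate any runtime.v1/runc.v1 references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A quick way to verify the result (expected values shown as comments; exact TOML layout varies with the containerd version):
	
	  grep -E 'sandbox_image|SystemdCgroup|runc\.v2|conf_dir' /etc/containerd/config.toml
	  # sandbox_image = "registry.k8s.io/pause:3.9"
	  # SystemdCgroup = false
	  # runtime_type = "io.containerd.runc.v2"
	  # conf_dir = "/etc/cni/net.d"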
	I0401 12:34:21.087298   13936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 12:34:21.109374   13936 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 12:34:21.122765   13936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
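	Both kernel switches touched here are Kubernetes preflight requirements: bridged pod traffic must traverse iptables, and the node must forward IPv4. Standalone equivalent (sketch):
	
	  sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should report 1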
	I0401 12:34:21.155682   13936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:34:21.380081   13936 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 12:34:21.416048   13936 start.go:494] detecting cgroup driver to use...
	I0401 12:34:21.429813   13936 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 12:34:21.459149   13936 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 12:34:21.459240   13936 command_runner.go:130] > [Unit]
	I0401 12:34:21.459240   13936 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 12:34:21.459275   13936 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 12:34:21.459275   13936 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 12:34:21.459275   13936 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 12:34:21.459317   13936 command_runner.go:130] > StartLimitBurst=3
	I0401 12:34:21.459317   13936 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 12:34:21.459317   13936 command_runner.go:130] > [Service]
	I0401 12:34:21.459317   13936 command_runner.go:130] > Type=notify
	I0401 12:34:21.459317   13936 command_runner.go:130] > Restart=on-failure
	I0401 12:34:21.459363   13936 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 12:34:21.459363   13936 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 12:34:21.459431   13936 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 12:34:21.459431   13936 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 12:34:21.459471   13936 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 12:34:21.459513   13936 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 12:34:21.459513   13936 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 12:34:21.459553   13936 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 12:34:21.459603   13936 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 12:34:21.459657   13936 command_runner.go:130] > ExecStart=
	I0401 12:34:21.459657   13936 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 12:34:21.459697   13936 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 12:34:21.459735   13936 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 12:34:21.459735   13936 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 12:34:21.459784   13936 command_runner.go:130] > LimitNOFILE=infinity
	I0401 12:34:21.459784   13936 command_runner.go:130] > LimitNPROC=infinity
	I0401 12:34:21.459823   13936 command_runner.go:130] > LimitCORE=infinity
	I0401 12:34:21.459864   13936 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 12:34:21.459864   13936 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0401 12:34:21.459901   13936 command_runner.go:130] > TasksMax=infinity
	I0401 12:34:21.459948   13936 command_runner.go:130] > TimeoutStartSec=0
	I0401 12:34:21.459948   13936 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 12:34:21.459948   13936 command_runner.go:130] > Delegate=yes
	I0401 12:34:21.459985   13936 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 12:34:21.459985   13936 command_runner.go:130] > KillMode=process
	I0401 12:34:21.459985   13936 command_runner.go:130] > [Install]
	I0401 12:34:21.460033   13936 command_runner.go:130] > WantedBy=multi-user.target
	I0401 12:34:21.473264   13936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:34:21.510240   13936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 12:34:21.553315   13936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:34:21.591634   13936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:34:21.627868   13936 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 12:34:21.691736   13936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:34:21.715159   13936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:34:21.752916   13936 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
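	With Docker selected as the runtime, crictl is re-pointed from the containerd socket written at 12:34:20 to the cri-dockerd shim. A quick connectivity check against the new endpoint (sketch; assumes cri-dockerd is running):
	
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info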
	I0401 12:34:21.765878   13936 ssh_runner.go:195] Run: which cri-dockerd
	I0401 12:34:21.773203   13936 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 12:34:21.784886   13936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 12:34:21.804584   13936 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
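	The 189-byte 10-cni.conf is scp'd from memory, so its contents never appear in the log; for the Docker runtime it conventionally re-declares cri-dockerd's ExecStart with CNI networking enabled. A hypothetical shape (the flags exist in cri-dockerd, but this exact command line is an assumption, not the file minikube ships):
	
	  printf '[Service]\nExecStart=\nExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni\n' \
	    | sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf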
	I0401 12:34:21.848690   13936 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 12:34:22.048508   13936 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 12:34:22.261829   13936 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 12:34:22.262215   13936 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
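	The 130-byte daemon.json scp'd here is what actually selects cgroupfs for dockerd; its bytes are likewise not echoed, but a daemon.json of this shape would have the same effect (illustrative, not the verbatim file):
	
	  printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"],\n  "log-driver": "json-file",\n  "log-opts": {"max-size": "100m"},\n  "storage-driver": "overlay2"\n}\n' \
	    | sudo tee /etc/docker/daemon.json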
	I0401 12:34:22.308684   13936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:34:22.535824   13936 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 12:35:23.675588   13936 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0401 12:35:23.675588   13936 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0401 12:35:23.677257   13936 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1410051s)
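	The 1m1.14s wall time matches a 60-second dial timeout: the journalctl dump below shows the restarted dockerd (pid 1030) blocking on /run/containerd/containerd.sock, and containerd had been stopped at 12:34:21 without being started again before docker was restarted. Two quick checks for that state (sketch):
	
	  sudo systemctl is-active containerd        # expect: inactive
	  sudo ss -xl | grep containerd.sock         # expect: no listener under /run/containerd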
	I0401 12:35:23.690412   13936 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 12:35:23.715511   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	I0401 12:35:23.715511   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[655]: time="2024-04-01T12:33:53.488693735Z" level=info msg="Starting up"
	I0401 12:35:23.716071   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[655]: time="2024-04-01T12:33:53.490499340Z" level=info msg="containerd not running, starting managed containerd"
	I0401 12:35:23.716071   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[655]: time="2024-04-01T12:33:53.491781844Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	I0401 12:35:23.716071   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.532203369Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0401 12:35:23.716071   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562514963Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0401 12:35:23.716164   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562651764Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0401 12:35:23.716164   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562824964Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0401 12:35:23.716221   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562928364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0401 12:35:23.716221   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.563446166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:35:23.716275   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.563553666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0401 12:35:23.716328   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.563925768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:35:23.716381   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564026168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0401 12:35:23.716381   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564049168Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0401 12:35:23.716381   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564072268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0401 12:35:23.716434   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564516769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.565258572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.568278481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.568380281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.568951283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569053183Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569803386Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569914186Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569934286Z" level=info msg="metadata content store policy set" policy=shared
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571840892Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571955892Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571980693Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571997193Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0401 12:35:23.716463   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572013093Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0401 12:35:23.717004   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572206993Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0401 12:35:23.717004   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572554894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0401 12:35:23.717004   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572931995Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0401 12:35:23.717004   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572956896Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0401 12:35:23.717004   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572973596Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0401 12:35:23.717172   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573102296Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717202   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573443997Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717202   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573469597Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717202   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573531597Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717266   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573550997Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717266   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573823298Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717266   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573939199Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717373   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573960999Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0401 12:35:23.717373   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573986199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.717444   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574006099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.717444   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574024399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.717554   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574040599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.717554   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574055799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.717605   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574077399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.717605   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574093899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.718810   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574108999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719040   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574126499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719040   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574144299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574159099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574173499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574189899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574209199Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574232399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574266800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574296200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574347600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574365800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574377800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574390000Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574455300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574472100Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574483100Z" level=info msg="NRI interface is disabled by configuration."
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574898902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.575047202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0401 12:35:23.719116   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.575120202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0401 12:35:23.719687   13936 command_runner.go:130] > Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.575161002Z" level=info msg="containerd successfully booted in 0.046758s"
	I0401 12:35:23.719687   13936 command_runner.go:130] > Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.552857165Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0401 12:35:23.719687   13936 command_runner.go:130] > Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.581551349Z" level=info msg="Loading containers: start."
	I0401 12:35:23.719687   13936 command_runner.go:130] > Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.854660257Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0401 12:35:23.719786   13936 command_runner.go:130] > Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.947108549Z" level=info msg="Loading containers: done."
	I0401 12:35:23.719786   13936 command_runner.go:130] > Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.975040397Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0401 12:35:23.719786   13936 command_runner.go:130] > Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.975633300Z" level=info msg="Daemon has completed initialization"
	I0401 12:35:23.719865   13936 command_runner.go:130] > Apr 01 12:33:55 multinode-965600 dockerd[655]: time="2024-04-01T12:33:55.040824285Z" level=info msg="API listen on /var/run/docker.sock"
	I0401 12:35:23.719942   13936 command_runner.go:130] > Apr 01 12:33:55 multinode-965600 systemd[1]: Started Docker Application Container Engine.
	I0401 12:35:23.719942   13936 command_runner.go:130] > Apr 01 12:33:55 multinode-965600 dockerd[655]: time="2024-04-01T12:33:55.043394248Z" level=info msg="API listen on [::]:2376"
	I0401 12:35:23.720071   13936 command_runner.go:130] > Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.562614201Z" level=info msg="Processing signal 'terminated'"
	I0401 12:35:23.720071   13936 command_runner.go:130] > Apr 01 12:34:22 multinode-965600 systemd[1]: Stopping Docker Application Container Engine...
	I0401 12:35:23.720071   13936 command_runner.go:130] > Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564186505Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0401 12:35:23.720145   13936 command_runner.go:130] > Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564558506Z" level=info msg="Daemon shutdown complete"
	I0401 12:35:23.720145   13936 command_runner.go:130] > Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564674606Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0401 12:35:23.720234   13936 command_runner.go:130] > Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564776706Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0401 12:35:23.720234   13936 command_runner.go:130] > Apr 01 12:34:23 multinode-965600 systemd[1]: docker.service: Deactivated successfully.
	I0401 12:35:23.720234   13936 command_runner.go:130] > Apr 01 12:34:23 multinode-965600 systemd[1]: Stopped Docker Application Container Engine.
	I0401 12:35:23.720234   13936 command_runner.go:130] > Apr 01 12:34:23 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	I0401 12:35:23.720323   13936 command_runner.go:130] > Apr 01 12:34:23 multinode-965600 dockerd[1030]: time="2024-04-01T12:34:23.647768220Z" level=info msg="Starting up"
	I0401 12:35:23.720323   13936 command_runner.go:130] > Apr 01 12:35:23 multinode-965600 dockerd[1030]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0401 12:35:23.720396   13936 command_runner.go:130] > Apr 01 12:35:23 multinode-965600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0401 12:35:23.720396   13936 command_runner.go:130] > Apr 01 12:35:23 multinode-965600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0401 12:35:23.720396   13936 command_runner.go:130] > Apr 01 12:35:23 multinode-965600 systemd[1]: Failed to start Docker Application Container Engine.
	I0401 12:35:23.730057   13936 out.go:177] 
	W0401 12:35:23.732705   13936 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 12:33:53 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 12:33:53 multinode-965600 dockerd[655]: time="2024-04-01T12:33:53.488693735Z" level=info msg="Starting up"
	Apr 01 12:33:53 multinode-965600 dockerd[655]: time="2024-04-01T12:33:53.490499340Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 12:33:53 multinode-965600 dockerd[655]: time="2024-04-01T12:33:53.491781844Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.532203369Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562514963Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562651764Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562824964Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.562928364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.563446166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.563553666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.563925768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564026168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564049168Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564072268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.564516769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.565258572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.568278481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.568380281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.568951283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569053183Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569803386Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569914186Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.569934286Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571840892Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571955892Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571980693Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.571997193Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572013093Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572206993Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572554894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572931995Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572956896Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.572973596Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573102296Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573443997Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573469597Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573531597Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573550997Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573823298Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573939199Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573960999Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.573986199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574006099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574024399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574040599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574055799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574077399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574093899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574108999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574126499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574144299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574159099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574173499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574189899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574209199Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574232399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574266800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574296200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574347600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574365800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574377800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574390000Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574455300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574472100Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574483100Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.574898902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.575047202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.575120202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 12:33:53 multinode-965600 dockerd[661]: time="2024-04-01T12:33:53.575161002Z" level=info msg="containerd successfully booted in 0.046758s"
	Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.552857165Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.581551349Z" level=info msg="Loading containers: start."
	Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.854660257Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.947108549Z" level=info msg="Loading containers: done."
	Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.975040397Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 12:33:54 multinode-965600 dockerd[655]: time="2024-04-01T12:33:54.975633300Z" level=info msg="Daemon has completed initialization"
	Apr 01 12:33:55 multinode-965600 dockerd[655]: time="2024-04-01T12:33:55.040824285Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 12:33:55 multinode-965600 systemd[1]: Started Docker Application Container Engine.
	Apr 01 12:33:55 multinode-965600 dockerd[655]: time="2024-04-01T12:33:55.043394248Z" level=info msg="API listen on [::]:2376"
	Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.562614201Z" level=info msg="Processing signal 'terminated'"
	Apr 01 12:34:22 multinode-965600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564186505Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564558506Z" level=info msg="Daemon shutdown complete"
	Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564674606Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 12:34:22 multinode-965600 dockerd[655]: time="2024-04-01T12:34:22.564776706Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 12:34:23 multinode-965600 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 12:34:23 multinode-965600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 12:34:23 multinode-965600 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 12:34:23 multinode-965600 dockerd[1030]: time="2024-04-01T12:34:23.647768220Z" level=info msg="Starting up"
	Apr 01 12:35:23 multinode-965600 dockerd[1030]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 12:35:23 multinode-965600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 12:35:23 multinode-965600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 12:35:23 multinode-965600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker: (output identical to the journal dump shown above)
	W0401 12:35:23.732705   13936 out.go:239] * 
	W0401 12:35:23.734974   13936 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 12:35:23.737550   13936 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-965600" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-965600
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-965600	172.19.151.177
After restart: multinode-965600	172.19.156.14
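The restarted VM came up with a new address (172.19.151.177 became 172.19.156.14), which is what trips the node-list comparison. A hedged way to reproduce that comparison by hand, using commands that appear in this report plus standard shell tools (before.txt/after.txt are illustrative filenames):

	out/minikube-windows-amd64.exe node list -p multinode-965600 > before.txt
	out/minikube-windows-amd64.exe stop -p multinode-965600
	out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true --alsologtostderr
	out/minikube-windows-amd64.exe node list -p multinode-965600 > after.txt
	diff before.txt after.txt   # any IP change after the restart shows up here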
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.7021354s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:35:24.382963   10912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:35:36.901336   10912 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (240.29s)
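Several of the following failures cascade from the stale kubeconfig flagged above ("multinode-965600" does not appear in the kubeconfig). A sketch of the fix the warning itself suggests, plus a verification step; the profile name is the one from this run:

	out/minikube-windows-amd64.exe update-context -p multinode-965600
	# Confirm kubectl now targets this profile and the right endpoint:
	kubectl config current-context
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'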

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (33.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 node delete m03: exit status 103 (7.8508861s)

                                                
                                                
-- stdout --
	* The control-plane node multinode-965600 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-965600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:35:37.094008    6156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-965600 node delete m03": exit status 103
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr: exit status 6 (12.87752s)

                                                
                                                
-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:35:44.951008    2272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:35:45.039016    2272 out.go:291] Setting OutFile to fd 956 ...
	I0401 12:35:45.039977    2272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:35:45.040057    2272 out.go:304] Setting ErrFile to fd 816...
	I0401 12:35:45.040057    2272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:35:45.060074    2272 out.go:298] Setting JSON to false
	I0401 12:35:45.060074    2272 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:35:45.060074    2272 notify.go:220] Checking for updates...
	I0401 12:35:45.061007    2272 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:35:45.061007    2272 status.go:255] checking status of multinode-965600 ...
	I0401 12:35:45.061689    2272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:35:47.362172    2272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:35:47.362440    2272 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:35:47.362440    2272 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:35:47.362440    2272 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:35:47.363151    2272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:35:49.679280    2272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:35:49.679840    2272 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:35:49.679840    2272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:35:52.386927    2272 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:35:52.387016    2272 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:35:52.387016    2272 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:35:52.399789    2272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:35:52.399789    2272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:35:54.702176    2272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:35:54.702176    2272 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:35:54.702698    2272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:35:57.465297    2272 main.go:141] libmachine: [stdout =====>] : 172.19.156.14
	
	I0401 12:35:57.465297    2272 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:35:57.466012    2272 sshutil.go:53] new ssh client: &{IP:172.19.156.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:35:57.571279    2272 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1713689s)
	I0401 12:35:57.585269    2272 ssh_runner.go:195] Run: systemctl --version
	I0401 12:35:57.609066    2272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0401 12:35:57.636671    2272 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:35:57.636804    2272 api_server.go:166] Checking apiserver status ...
	I0401 12:35:57.648314    2272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0401 12:35:57.672257    2272 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:35:57.672351    2272 status.go:422] multinode-965600 apiserver status = Stopped (err=<nil>)
	I0401 12:35:57.672351    2272 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
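For reference, the status probe traced in the stderr dump above asks Hyper-V for the VM state, resolves the VM's first IP address, then SSHes in and samples disk pressure before checking the kubelet and apiserver. The disk check is this one-liner, copied from the log:

	# Print column 5 (Use%) of the second output line of df, e.g. "23%"
	df -h /var | awk 'NR==2{print $5}'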
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 6 (12.8213072s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:35:57.830095    2972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 12:36:10.456180    2972 status.go:417] kubeconfig endpoint: get endpoint: "multinode-965600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeleteNode (33.55s)
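Exit status 103 above means the control-plane apiserver was stopped, so `node delete` refused to proceed. A speculative recovery order, built only from commands that already appear in this report:

	# Bring the control plane back, verify, then retry the deletion:
	out/minikube-windows-amd64.exe start -p multinode-965600
	out/minikube-windows-amd64.exe -p multinode-965600 status
	out/minikube-windows-amd64.exe -p multinode-965600 node delete m03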

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (89.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-965600 stop: (1m22.164816s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status: exit status 7 (2.4844774s)

                                                
                                                
-- stdout --
	multinode-965600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:37:32.826226    6804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr: exit status 7 (2.5214333s)

                                                
                                                
-- stdout --
	multinode-965600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:37:35.319049   12828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:37:35.396578   12828 out.go:291] Setting OutFile to fd 816 ...
	I0401 12:37:35.412477   12828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:37:35.412477   12828 out.go:304] Setting ErrFile to fd 988...
	I0401 12:37:35.412477   12828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:37:35.429970   12828 out.go:298] Setting JSON to false
	I0401 12:37:35.429970   12828 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:37:35.429970   12828 notify.go:220] Checking for updates...
	I0401 12:37:35.430655   12828 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:37:35.430655   12828 status.go:255] checking status of multinode-965600 ...
	I0401 12:37:35.431904   12828 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:37:37.669305   12828 main.go:141] libmachine: [stdout =====>] : Off
	
	I0401 12:37:37.669305   12828 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:37:37.669305   12828 status.go:330] multinode-965600 host status = "Stopped" (err=<nil>)
	I0401 12:37:37.669305   12828 status.go:343] host is not running, skipping remaining checks
	I0401 12:37:37.669305   12828 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr": multinode-965600
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr": multinode-965600
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: exit status 7 (2.5530312s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:37:37.833243    4720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-965600" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (89.73s)
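The two assertions above count per-node status blocks and expected entries for both nodes; only the control plane reported, because the second node was already lost during the earlier failed restart. A rough shell equivalent of the count the test performs, assuming a POSIX shell is available:

	# Expect 2 for a cleanly stopped two-node cluster; this run yields 1:
	out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr | grep -c "host: Stopped"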

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (234.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true -v=8 --alsologtostderr --driver=hyperv
E0401 12:38:23.471162    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-965600 --wait=true -v=8 --alsologtostderr --driver=hyperv: (3m5.5095111s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr: (12.8222747s)
multinode_test.go:388: status says both hosts are not running: args "out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr": 
-- stdout --
	multinode-965600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:40:45.883874    3052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:40:45.964612    3052 out.go:291] Setting OutFile to fd 672 ...
	I0401 12:40:45.965591    3052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:40:45.965591    3052 out.go:304] Setting ErrFile to fd 940...
	I0401 12:40:45.965591    3052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:40:45.978457    3052 out.go:298] Setting JSON to false
	I0401 12:40:45.978457    3052 mustload.go:65] Loading cluster: multinode-965600
	I0401 12:40:45.978457    3052 notify.go:220] Checking for updates...
	I0401 12:40:45.979653    3052 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:40:45.979653    3052 status.go:255] checking status of multinode-965600 ...
	I0401 12:40:45.980643    3052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:40:48.282451    3052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:48.282451    3052 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:48.282451    3052 status.go:330] multinode-965600 host status = "Running" (err=<nil>)
	I0401 12:40:48.282451    3052 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:40:48.283271    3052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:40:50.599753    3052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:50.599753    3052 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:50.600375    3052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:40:53.282418    3052 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:40:53.282564    3052 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:53.282564    3052 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:40:53.300529    3052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 12:40:53.300529    3052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:40:55.546129    3052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:55.546129    3052 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:55.546129    3052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:40:58.257479    3052 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:40:58.257479    3052 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:58.258442    3052 sshutil.go:53] new ssh client: &{IP:172.19.154.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:40:58.369565    3052 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0690008s)
	I0401 12:40:58.383146    3052 ssh_runner.go:195] Run: systemctl --version
	I0401 12:40:58.405103    3052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 12:40:58.437281    3052 kubeconfig.go:125] found "multinode-965600" server: "https://172.19.154.221:8443"
	I0401 12:40:58.437413    3052 api_server.go:166] Checking apiserver status ...
	I0401 12:40:58.449873    3052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 12:40:58.495765    3052 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2225/cgroup
	W0401 12:40:58.524543    3052 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 12:40:58.537427    3052 ssh_runner.go:195] Run: ls
	I0401 12:40:58.544555    3052 api_server.go:253] Checking apiserver healthz at https://172.19.154.221:8443/healthz ...
	I0401 12:40:58.552781    3052 api_server.go:279] https://172.19.154.221:8443/healthz returned 200:
	ok
	I0401 12:40:58.552781    3052 status.go:422] multinode-965600 apiserver status = Running (err=<nil>)
	I0401 12:40:58.552781    3052 status.go:257] multinode-965600 status: &{Name:multinode-965600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:392: status says both kubelets are not running: args "out/minikube-windows-amd64.exe -p multinode-965600 status --alsologtostderr" (stdout/stderr identical to the dump above)
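The "unable to find freezer cgroup" warning in the dump above is harmless here: on a guest using the unified cgroup v2 hierarchy there is no "freezer:" line in /proc/<pid>/cgroup, so minikube falls back to probing the apiserver's /healthz endpoint directly, which returned 200. Two hedged spot checks, run inside the guest (the IP is the one from this run):

	stat -fc %T /sys/fs/cgroup                    # "cgroup2fs" indicates cgroup v2
	curl -ks https://172.19.154.221:8443/healthz  # the probe that returned "ok" above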
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:409: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

                                                
                                                
-- /stdout --
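The go-template above prints one Ready-condition status per node, so a single `True` means the second node never re-registered after the restart. A hypothetical, roughly equivalent query using jsonpath instead of a go-template:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'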
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: (12.7630272s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-965600 logs -n 25: (9.0254822s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- exec          | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | -- nslookup kubernetes.io            |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- exec          | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | -- nslookup kubernetes.default       |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600                  | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | -- exec  -- nslookup                 |                  |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:28 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |                |                     |                     |
	| node    | add -p multinode-965600 -v 3         | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:28 UTC |                     |
	|         | --alsologtostderr                    |                  |                   |                |                     |                     |
	| node    | multinode-965600 node stop m03       | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:29 UTC |                     |
	| node    | multinode-965600 node start          | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:30 UTC |                     |
	|         | m03 -v=7 --alsologtostderr           |                  |                   |                |                     |                     |
	| node    | list -p multinode-965600             | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:31 UTC |                     |
	| stop    | -p multinode-965600                  | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:31 UTC | 01 Apr 24 12:32 UTC |
	| start   | -p multinode-965600                  | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:32 UTC |                     |
	|         | --wait=true -v=8                     |                  |                   |                |                     |                     |
	|         | --alsologtostderr                    |                  |                   |                |                     |                     |
	| node    | list -p multinode-965600             | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:35 UTC |                     |
	| node    | multinode-965600 node delete         | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:35 UTC |                     |
	|         | m03                                  |                  |                   |                |                     |                     |
	| stop    | multinode-965600 stop                | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:36 UTC | 01 Apr 24 12:37 UTC |
	| start   | -p multinode-965600                  | multinode-965600 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:37 UTC | 01 Apr 24 12:40 UTC |
	|         | --wait=true -v=8                     |                  |                   |                |                     |                     |
	|         | --alsologtostderr                    |                  |                   |                |                     |                     |
	|         | --driver=hyperv                      |                  |                   |                |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 12:37:40
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 12:37:40.477571    2788 out.go:291] Setting OutFile to fd 976 ...
	I0401 12:37:40.478399    2788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:37:40.478399    2788 out.go:304] Setting ErrFile to fd 672...
	I0401 12:37:40.478399    2788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:37:40.511996    2788 out.go:298] Setting JSON to false
	I0401 12:37:40.516070    2788 start.go:129] hostinfo: {"hostname":"minikube6","uptime":317818,"bootTime":1711657241,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 12:37:40.516070    2788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 12:37:40.520084    2788 out.go:177] * [multinode-965600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 12:37:40.545219    2788 notify.go:220] Checking for updates...
	I0401 12:37:40.547708    2788 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:37:40.550493    2788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 12:37:40.553231    2788 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 12:37:40.556206    2788 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 12:37:40.558509    2788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 12:37:40.561623    2788 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:37:40.562501    2788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 12:37:46.262263    2788 out.go:177] * Using the hyperv driver based on existing profile
	I0401 12:37:46.266193    2788 start.go:297] selected driver: hyperv
	I0401 12:37:46.266278    2788 start.go:901] validating driver "hyperv" against &{Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.14 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:37:46.266278    2788 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 12:37:46.321064    2788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 12:37:46.321064    2788 cni.go:84] Creating CNI manager for ""
	I0401 12:37:46.321064    2788 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 12:37:46.321064    2788 start.go:340] cluster config:
	{Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.14 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:37:46.321664    2788 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 12:37:46.326638    2788 out.go:177] * Starting "multinode-965600" primary control-plane node in "multinode-965600" cluster
	I0401 12:37:46.328846    2788 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 12:37:46.328846    2788 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 12:37:46.328846    2788 cache.go:56] Caching tarball of preloaded images
	I0401 12:37:46.329389    2788 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 12:37:46.329631    2788 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 12:37:46.329700    2788 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\config.json ...
	I0401 12:37:46.331933    2788 start.go:360] acquireMachinesLock for multinode-965600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 12:37:46.331933    2788 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-965600"
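
Machine operations are serialized behind a named lock whose Delay and Timeout fields appear in the struct above (500ms between attempts, 13 minutes overall); the lock was uncontended on this run, hence the 0s metric. The retry semantics as a small sketch, with try standing in for whatever non-blocking acquire the lock implementation provides (not minikube's actual code):

    package sketch

    import (
        "errors"
        "time"
    )

    // acquire spins on a non-blocking lock attempt, sleeping delay between
    // tries, until timeout elapses -- the Delay/Timeout semantics above.
    func acquire(try func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if try() {
                return nil
            }
            time.Sleep(delay)
        }
        return errors.New("timed out acquiring machines lock")
    }
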
	I0401 12:37:46.331933    2788 start.go:96] Skipping create...Using existing machine configuration
	I0401 12:37:46.332457    2788 fix.go:54] fixHost starting: 
	I0401 12:37:46.332903    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:37:49.219552    2788 main.go:141] libmachine: [stdout =====>] : Off
	
	I0401 12:37:49.219745    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:37:49.219745    2788 fix.go:112] recreateIfNeeded on multinode-965600: state=Stopped err=<nil>
	W0401 12:37:49.219745    2788 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 12:37:49.225562    2788 out.go:177] * Restarting existing hyperv VM for "multinode-965600" ...
	I0401 12:37:49.227825    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-965600
	I0401 12:37:52.404352    2788 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:37:52.404664    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:37:52.404664    2788 main.go:141] libmachine: Waiting for host to start...
	I0401 12:37:52.404751    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:37:54.751793    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:37:54.752214    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:37:54.752214    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:37:57.377191    2788 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:37:57.377757    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:37:58.393180    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:00.674423    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:00.675321    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:00.675386    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:03.358167    2788 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:38:03.358167    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:04.374006    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:06.693684    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:06.693911    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:06.694003    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:09.390936    2788 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:38:09.391182    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:10.405403    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:12.742269    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:12.742472    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:12.742903    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:15.382691    2788 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:38:15.382917    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:16.392115    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:18.697372    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:18.697444    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:18.697444    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:21.375078    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:21.375078    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:21.378191    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:23.633765    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:23.634836    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:23.634929    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:26.329411    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:26.329411    2788 main.go:141] libmachine: [stderr =====>] : 
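
The restart loop above issues the same two PowerShell queries -- VM state, then the first adapter's first IP address -- until stdout stops coming back empty; the guest needed roughly 30 seconds after Start-VM before 172.19.154.221 appeared. A minimal sketch of that poll, assuming powershell.exe is on PATH and using a hypothetical vmIP helper (not minikube's actual implementation):

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // vmIP polls Hyper-V through PowerShell until the guest's first network
    // adapter reports an address, or the timeout elapses.
    func vmIP(vmName string, timeout time.Duration) (string, error) {
        query := fmt.Sprintf("((Hyper-V\\Get-VM %s).networkadapters[0]).ipaddresses[0]", vmName)
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
            if err == nil {
                if ip := strings.TrimSpace(string(out)); ip != "" {
                    return ip, nil // adapter finally reported an address
                }
            }
            time.Sleep(time.Second) // back off before re-querying, as the log does
        }
        return "", fmt.Errorf("timed out waiting for %q to report an IP", vmName)
    }
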
	I0401 12:38:26.330438    2788 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\config.json ...
	I0401 12:38:26.333301    2788 machine.go:94] provisionDockerMachine start ...
	I0401 12:38:26.333455    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:28.592929    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:28.592929    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:28.594059    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:31.269378    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:31.269594    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:31.275165    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:38:31.275974    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:38:31.275974    2788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 12:38:31.407650    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 12:38:31.407723    2788 buildroot.go:166] provisioning hostname "multinode-965600"
	I0401 12:38:31.407801    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:33.613610    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:33.613610    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:33.613679    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:36.282323    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:36.282323    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:36.287726    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:38:36.288452    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:38:36.288452    2788 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-965600 && echo "multinode-965600" | sudo tee /etc/hostname
	I0401 12:38:36.471787    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965600
	
	I0401 12:38:36.472335    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:38.727555    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:38.727555    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:38.728577    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:41.365139    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:41.365139    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:41.371156    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:38:41.371828    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:38:41.371828    2788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-965600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-965600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-965600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 12:38:41.517362    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
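
Provisioning then runs one SSH command per step: hostname, /etc/hostname, and the guarded /etc/hosts edit above, which only rewrites the 127.0.1.1 line when the machine name is not already present, so the script stays idempotent across restarts. A sketch of the transport side using golang.org/x/crypto/ssh; the runSSH helper is an assumption for illustration, and the caller is expected to supply auth and host-key settings in cfg:

    package sketch

    import "golang.org/x/crypto/ssh"

    // runSSH opens one session, runs a single command, and returns its
    // combined stdout/stderr -- one call per provisioning step.
    func runSSH(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
        client, err := ssh.Dial("tcp", addr, cfg) // addr is "172.19.154.221:22" above
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }
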
	I0401 12:38:41.517362    2788 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 12:38:41.517362    2788 buildroot.go:174] setting up certificates
	I0401 12:38:41.517362    2788 provision.go:84] configureAuth start
	I0401 12:38:41.517362    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:43.756787    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:43.756787    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:43.756787    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:46.436525    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:46.436525    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:46.436525    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:48.625834    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:48.625834    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:48.625834    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:51.316528    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:51.316668    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:51.316668    2788 provision.go:143] copyHostCerts
	I0401 12:38:51.316954    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0401 12:38:51.317308    2788 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 12:38:51.317308    2788 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 12:38:51.317768    2788 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 12:38:51.318799    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0401 12:38:51.319062    2788 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 12:38:51.319243    2788 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 12:38:51.319424    2788 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 12:38:51.320602    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0401 12:38:51.320684    2788 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 12:38:51.320684    2788 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 12:38:51.321303    2788 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 12:38:51.322013    2788 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-965600 san=[127.0.0.1 172.19.154.221 localhost minikube multinode-965600]
	I0401 12:38:51.652051    2788 provision.go:177] copyRemoteCerts
	I0401 12:38:51.667982    2788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 12:38:51.668545    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:53.845921    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:53.845921    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:53.846742    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:38:56.560035    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:38:56.560035    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:56.561567    2788 sshutil.go:53] new ssh client: &{IP:172.19.154.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:38:56.676574    2788 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0085569s)
	I0401 12:38:56.676574    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0401 12:38:56.677605    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 12:38:56.730385    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0401 12:38:56.730836    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0401 12:38:56.779167    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0401 12:38:56.779956    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 12:38:56.831133    2788 provision.go:87] duration metric: took 15.3136638s to configureAuth
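
configureAuth regenerates the Docker server certificate so its SANs cover every name the daemon will be reached by -- 127.0.0.1, the fresh DHCP address 172.19.154.221, and the host names -- then copies ca.pem, server.pem, and server-key.pem into /etc/docker for the --tlsverify flags used below. A compressed sketch of issuing such a certificate; it is self-signed here for brevity (the real step signs against the CA key, and the key file write is omitted):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // writeServerCert issues a TLS server cert whose SANs mirror the
    // san=[...] list logged above: loopback, the VM IP, and its hostnames.
    func writeServerCert(ip string, hosts []string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"sketch"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(ip)},
            DNSNames:     hosts,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return err
        }
        return os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    }
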
	I0401 12:38:56.831133    2788 buildroot.go:189] setting minikube options for container-runtime
	I0401 12:38:56.831775    2788 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:38:56.831775    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:38:59.066014    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:38:59.066208    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:38:59.066280    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:01.751250    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:01.751250    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:01.757912    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:39:01.757912    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:39:01.758459    2788 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 12:39:01.890379    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 12:39:01.890379    2788 buildroot.go:70] root file system type: tmpfs
	I0401 12:39:01.890379    2788 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 12:39:01.890913    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:04.143130    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:04.143288    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:04.143288    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:06.829090    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:06.829431    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:06.836576    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:39:06.836990    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:39:06.837530    2788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 12:39:06.992965    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 12:39:06.993552    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:09.250043    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:09.250793    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:09.250878    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:11.973076    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:11.973076    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:11.979502    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:39:11.980028    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:39:11.980129    2788 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 12:39:14.185689    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0401 12:39:14.185689    2788 machine.go:97] duration metric: took 47.8520527s to provisionDockerMachine
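
The unit install is deliberately idempotent: the rendered file lands at docker.service.new, and the mv/daemon-reload/restart branch only runs when diff exits non-zero -- either the contents changed or, as the "can't stat" message shows on this freshly restored VM, no unit existed yet. The same write-if-changed pattern as a host-side Go sketch:

    package sketch

    import (
        "bytes"
        "os"
    )

    // updateIfChanged mirrors the diff-then-mv step: leave the unit alone
    // when identical, otherwise stage a .new file and swap it into place so
    // a daemon-reload and restart are only triggered on real changes.
    func updateIfChanged(path string, contents []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, contents) {
            return false, nil
        }
        if err := os.WriteFile(path+".new", contents, 0644); err != nil {
            return false, err
        }
        return true, os.Rename(path+".new", path) // atomic swap, like the mv above
    }
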
	I0401 12:39:14.186697    2788 start.go:293] postStartSetup for "multinode-965600" (driver="hyperv")
	I0401 12:39:14.186697    2788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 12:39:14.199714    2788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 12:39:14.199714    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:16.468030    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:16.468876    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:16.469025    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:19.175504    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:19.175504    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:19.176118    2788 sshutil.go:53] new ssh client: &{IP:172.19.154.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:39:19.282157    2788 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0824073s)
	I0401 12:39:19.297758    2788 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 12:39:19.306981    2788 command_runner.go:130] > NAME=Buildroot
	I0401 12:39:19.307137    2788 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 12:39:19.307137    2788 command_runner.go:130] > ID=buildroot
	I0401 12:39:19.307137    2788 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 12:39:19.307137    2788 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 12:39:19.307388    2788 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 12:39:19.307592    2788 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 12:39:19.309328    2788 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 12:39:19.311466    2788 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 12:39:19.311579    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /etc/ssl/certs/12602.pem
	I0401 12:39:19.325431    2788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 12:39:19.348488    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 12:39:19.403011    2788 start.go:296] duration metric: took 5.2162239s for postStartSetup
	I0401 12:39:19.403127    2788 fix.go:56] duration metric: took 1m33.0705418s for fixHost
	I0401 12:39:19.403178    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:21.606801    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:21.606801    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:21.606876    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:24.290781    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:24.290954    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:24.297295    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:39:24.297295    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:39:24.297835    2788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 12:39:24.436803    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711975164.438054295
	
	I0401 12:39:24.436803    2788 fix.go:216] guest clock: 1711975164.438054295
	I0401 12:39:24.436803    2788 fix.go:229] Guest: 2024-04-01 12:39:24.438054295 +0000 UTC Remote: 2024-04-01 12:39:19.4031271 +0000 UTC m=+99.129085901 (delta=5.034927195s)
	I0401 12:39:24.436803    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:26.705676    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:26.705676    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:26.705907    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:29.463634    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:29.463634    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:29.469550    2788 main.go:141] libmachine: Using SSH client type: native
	I0401 12:39:29.470629    2788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.154.221 22 <nil> <nil>}
	I0401 12:39:29.470629    2788 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711975164
	I0401 12:39:29.616659    2788 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 12:39:24 UTC 2024
	
	I0401 12:39:29.616659    2788 fix.go:236] clock set: Mon Apr  1 12:39:24 UTC 2024
	 (err=<nil>)
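
Because the VM sat powered off, its clock drifted; fixHost reads the guest's time with date +%s.%N, computes the delta against the host (5.03s here), and pushes a corrected epoch back with date -s. A sketch of that check, assuming the host clock is authoritative and reusing the hypothetical runSSH-style helper from above:

    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // syncGuestClock resets the guest clock when its skew against the host
    // exceeds threshold, as the "clock set" step above does.
    func syncGuestClock(run func(cmd string) (string, error), threshold time.Duration) error {
        out, err := run("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        guest := time.Unix(0, int64(secs*1e9))
        if d := time.Since(guest); d > threshold || d < -threshold {
            _, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }
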
	I0401 12:39:29.616659    2788 start.go:83] releasing machines lock for "multinode-965600", held for 1m43.2840025s
	I0401 12:39:29.616659    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:31.861018    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:31.861266    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:31.861266    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:34.597831    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:34.598373    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:34.603213    2788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 12:39:34.603760    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:34.617260    2788 ssh_runner.go:195] Run: cat /version.json
	I0401 12:39:34.617260    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:39:36.904943    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:36.904943    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:36.904943    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:36.907221    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:39:36.907221    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:36.907759    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:39:39.716373    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:39.716462    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:39.717019    2788 sshutil.go:53] new ssh client: &{IP:172.19.154.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:39:39.749726    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:39:39.749726    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:39:39.750283    2788 sshutil.go:53] new ssh client: &{IP:172.19.154.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:39:39.934477    2788 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 12:39:39.934477    2788 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3312267s)
	I0401 12:39:39.934674    2788 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 12:39:39.934787    2788 ssh_runner.go:235] Completed: cat /version.json: (5.3174905s)
	I0401 12:39:39.947818    2788 ssh_runner.go:195] Run: systemctl --version
	I0401 12:39:39.958327    2788 command_runner.go:130] > systemd 252 (252)
	I0401 12:39:39.958558    2788 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 12:39:39.972292    2788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 12:39:39.982125    2788 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 12:39:39.982732    2788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 12:39:39.993969    2788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 12:39:40.024907    2788 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0401 12:39:40.025560    2788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 12:39:40.025672    2788 start.go:494] detecting cgroup driver to use...
	I0401 12:39:40.026165    2788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:39:40.064312    2788 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0401 12:39:40.077631    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 12:39:40.108450    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 12:39:40.130835    2788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 12:39:40.143673    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 12:39:40.176756    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:39:40.210903    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 12:39:40.244133    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:39:40.277170    2788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 12:39:40.311771    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 12:39:40.344455    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 12:39:40.381129    2788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 12:39:40.417039    2788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 12:39:40.437418    2788 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 12:39:40.451211    2788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 12:39:40.485865    2788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:39:40.702652    2788 ssh_runner.go:195] Run: sudo systemctl restart containerd
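
Before settling on Docker, the start path normalizes /etc/containerd/config.toml with a series of in-place sed edits -- cgroupfs instead of the systemd cgroup driver, the runc.v2 shim, the pause:3.9 sandbox image, and the CNI conf dir -- then reloads and restarts containerd so a later runtime switch finds a consistent config. The SystemdCgroup edit as a Go sketch, preserving the line's indentation through a capture group just as the sed expression does:

    package sketch

    import "regexp"

    // setCgroupfs rewrites the SystemdCgroup key the way the sed call above
    // does: match the whole line, keep its leading whitespace, force false.
    func setCgroupfs(conf []byte) []byte {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
    }
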
	I0401 12:39:40.741264    2788 start.go:494] detecting cgroup driver to use...
	I0401 12:39:40.753502    2788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 12:39:40.778142    2788 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0401 12:39:40.778191    2788 command_runner.go:130] > [Unit]
	I0401 12:39:40.778191    2788 command_runner.go:130] > Description=Docker Application Container Engine
	I0401 12:39:40.778191    2788 command_runner.go:130] > Documentation=https://docs.docker.com
	I0401 12:39:40.778191    2788 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0401 12:39:40.778191    2788 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0401 12:39:40.778191    2788 command_runner.go:130] > StartLimitBurst=3
	I0401 12:39:40.778266    2788 command_runner.go:130] > StartLimitIntervalSec=60
	I0401 12:39:40.778294    2788 command_runner.go:130] > [Service]
	I0401 12:39:40.778294    2788 command_runner.go:130] > Type=notify
	I0401 12:39:40.778294    2788 command_runner.go:130] > Restart=on-failure
	I0401 12:39:40.778331    2788 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0401 12:39:40.778365    2788 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0401 12:39:40.778393    2788 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0401 12:39:40.778393    2788 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0401 12:39:40.778393    2788 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0401 12:39:40.778393    2788 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0401 12:39:40.778393    2788 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0401 12:39:40.778393    2788 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0401 12:39:40.778393    2788 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0401 12:39:40.778393    2788 command_runner.go:130] > ExecStart=
	I0401 12:39:40.778393    2788 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0401 12:39:40.778393    2788 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0401 12:39:40.778393    2788 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0401 12:39:40.778393    2788 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0401 12:39:40.778393    2788 command_runner.go:130] > LimitNOFILE=infinity
	I0401 12:39:40.778393    2788 command_runner.go:130] > LimitNPROC=infinity
	I0401 12:39:40.778393    2788 command_runner.go:130] > LimitCORE=infinity
	I0401 12:39:40.778393    2788 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0401 12:39:40.778393    2788 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0401 12:39:40.778393    2788 command_runner.go:130] > TasksMax=infinity
	I0401 12:39:40.778393    2788 command_runner.go:130] > TimeoutStartSec=0
	I0401 12:39:40.778393    2788 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0401 12:39:40.778393    2788 command_runner.go:130] > Delegate=yes
	I0401 12:39:40.778393    2788 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0401 12:39:40.778393    2788 command_runner.go:130] > KillMode=process
	I0401 12:39:40.778393    2788 command_runner.go:130] > [Install]
	I0401 12:39:40.778393    2788 command_runner.go:130] > WantedBy=multi-user.target
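Note: the ExecStart= reset in this unit is systemd's standard override pattern. An empty ExecStart= clears the command inherited from the base configuration so the next ExecStart= fully replaces it; without the reset, systemd rejects the unit with the "more than one ExecStart=" error quoted in the comments above. A minimal sketch of the same pattern in a hypothetical drop-in (illustrative paths and flags, not the file above):

	# /etc/systemd/system/docker.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

	# apply it the same way the log does below:
	sudo systemctl daemon-reload && sudo systemctl restart docker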
	I0401 12:39:40.790440    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:39:40.829452    2788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 12:39:40.882193    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:39:40.921856    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:39:40.963208    2788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 12:39:41.030506    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:39:41.054770    2788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:39:41.092384    2788 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
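With /etc/crictl.yaml pointing runtime-endpoint at the cri-dockerd socket, crictl talks to Docker through the CRI shim without needing --runtime-endpoint on every invocation. A quick smoke test from a shell on the node (crictl's path is confirmed at 12:39:45 below):

	# both should answer via unix:///var/run/cri-dockerd.sock
	sudo crictl info
	sudo crictl ps -a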
	I0401 12:39:41.105904    2788 ssh_runner.go:195] Run: which cri-dockerd
	I0401 12:39:41.112596    2788 command_runner.go:130] > /usr/bin/cri-dockerd
	I0401 12:39:41.125244    2788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 12:39:41.146712    2788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 12:39:41.193853    2788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 12:39:41.433875    2788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 12:39:41.644475    2788 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 12:39:41.644754    2788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
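The 130-byte /etc/docker/daemon.json written here is what switches Docker to the cgroupfs driver; the log does not echo its contents. A representative sketch of what such a file looks like (illustrative only, not the verbatim bytes):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}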
	I0401 12:39:41.690364    2788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:39:41.903067    2788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 12:39:44.453735    2788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5496694s)
	I0401 12:39:44.465324    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 12:39:44.515643    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 12:39:44.552583    2788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 12:39:44.778689    2788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 12:39:44.994132    2788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:39:45.217396    2788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 12:39:45.261812    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 12:39:45.301226    2788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:39:45.515894    2788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 12:39:45.622249    2788 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 12:39:45.635245    2788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 12:39:45.646266    2788 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0401 12:39:45.646266    2788 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0401 12:39:45.646266    2788 command_runner.go:130] > Device: 0,22	Inode: 873         Links: 1
	I0401 12:39:45.646266    2788 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0401 12:39:45.646266    2788 command_runner.go:130] > Access: 2024-04-01 12:39:45.544240591 +0000
	I0401 12:39:45.646266    2788 command_runner.go:130] > Modify: 2024-04-01 12:39:45.544240591 +0000
	I0401 12:39:45.646266    2788 command_runner.go:130] > Change: 2024-04-01 12:39:45.547240588 +0000
	I0401 12:39:45.646266    2788 command_runner.go:130] >  Birth: -
	I0401 12:39:45.646266    2788 start.go:562] Will wait 60s for crictl version
	I0401 12:39:45.660106    2788 ssh_runner.go:195] Run: which crictl
	I0401 12:39:45.667663    2788 command_runner.go:130] > /usr/bin/crictl
	I0401 12:39:45.681500    2788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 12:39:45.762159    2788 command_runner.go:130] > Version:  0.1.0
	I0401 12:39:45.762354    2788 command_runner.go:130] > RuntimeName:  docker
	I0401 12:39:45.762354    2788 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0401 12:39:45.762354    2788 command_runner.go:130] > RuntimeApiVersion:  v1
	I0401 12:39:45.762354    2788 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 12:39:45.772635    2788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 12:39:45.810002    2788 command_runner.go:130] > 26.0.0
	I0401 12:39:45.821700    2788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 12:39:45.852792    2788 command_runner.go:130] > 26.0.0
	I0401 12:39:45.855784    2788 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 12:39:45.856100    2788 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 12:39:45.860599    2788 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 12:39:45.860599    2788 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 12:39:45.860599    2788 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 12:39:45.860599    2788 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 12:39:45.863592    2788 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 12:39:45.863592    2788 ip.go:210] interface addr: 172.19.144.1/20
	I0401 12:39:45.875339    2788 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 12:39:45.882345    2788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
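The one-liner above is an idempotent hosts-file update: grep -v strips any stale host.minikube.internal line, the echo appends the fresh mapping, and the result is copied back over /etc/hosts in a single sudo cp. The same pattern works for any managed entry (NAME and IP are placeholders):

	# replace-or-add "IP<TAB>NAME" in /etc/hosts
	{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts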
	I0401 12:39:45.907206    2788 kubeadm.go:877] updating cluster {Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.154.221 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 12:39:45.907542    2788 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 12:39:45.917579    2788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 12:39:45.940832    2788 docker.go:685] Got preloaded images: 
	I0401 12:39:45.940832    2788 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0401 12:39:45.953611    2788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 12:39:45.972103    2788 command_runner.go:139] > {"Repositories":{}}
	I0401 12:39:45.983489    2788 ssh_runner.go:195] Run: which lz4
	I0401 12:39:45.989568    2788 command_runner.go:130] > /usr/bin/lz4
	I0401 12:39:45.989568    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0401 12:39:46.002250    2788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 12:39:46.008870    2788 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 12:39:46.010010    2788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 12:39:46.010010    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0401 12:39:47.970400    2788 docker.go:649] duration metric: took 1.9804977s to copy over tarball
	I0401 12:39:47.982931    2788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 12:39:57.413068    2788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.4300363s)
	I0401 12:39:57.413166    2788 ssh_runner.go:146] rm: /preloaded.tar.lz4
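The --xattrs --xattrs-include security.capability flags on that extraction are deliberate: GNU tar drops extended attributes by default, and losing security.capability would strip file capabilities from the preloaded binaries. The manual equivalent of this step is:

	# unpack the lz4 preload under /var, preserving capability xattrs, then drop the tarball
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4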
	I0401 12:39:57.485586    2788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 12:39:57.504401    2788 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0401 12:39:57.504401    2788 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0401 12:39:57.556854    2788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:39:57.808839    2788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 12:40:00.666269    2788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8574105s)
	I0401 12:40:00.676260    2788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 12:40:00.707267    2788 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0401 12:40:00.707935    2788 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0401 12:40:00.707935    2788 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0401 12:40:00.707935    2788 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0401 12:40:00.707935    2788 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0401 12:40:00.707935    2788 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0401 12:40:00.707935    2788 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0401 12:40:00.707935    2788 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 12:40:00.708048    2788 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0401 12:40:00.708048    2788 cache_images.go:84] Images are preloaded, skipping loading
	I0401 12:40:00.708127    2788 kubeadm.go:928] updating node { 172.19.154.221 8443 v1.29.3 docker true true} ...
	I0401 12:40:00.708375    2788 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-965600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.154.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
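This kubelet unit uses the same two-line ExecStart reset as the docker drop-in earlier, and pins --hostname-override and --node-ip to the values minikube resolved for this Hyper-V VM. Once the unit and drop-in are scp'd below, the effective command line can be checked with:

	# show the merged unit text and the resolved ExecStart
	systemctl cat kubelet
	systemctl show -p ExecStart kubelet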
	I0401 12:40:00.718225    2788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0401 12:40:00.752189    2788 command_runner.go:130] > cgroupfs
	I0401 12:40:00.752528    2788 cni.go:84] Creating CNI manager for ""
	I0401 12:40:00.752528    2788 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 12:40:00.752528    2788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 12:40:00.752654    2788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.154.221 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-965600 NodeName:multinode-965600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.154.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.154.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 12:40:00.752810    2788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.154.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-965600"
	  kubeletExtraArgs:
	    node-ip: 172.19.154.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.154.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 12:40:00.765504    2788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 12:40:00.784228    2788 command_runner.go:130] > kubeadm
	I0401 12:40:00.784228    2788 command_runner.go:130] > kubectl
	I0401 12:40:00.784228    2788 command_runner.go:130] > kubelet
	I0401 12:40:00.784575    2788 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 12:40:00.796477    2788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 12:40:00.816355    2788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0401 12:40:00.849737    2788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 12:40:00.882952    2788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
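Having written /var/tmp/minikube/kubeadm.yaml.new, minikube later copies it into place and runs kubeadm init against it (12:40:03 below). The same file can be exercised without touching node state via kubeadm's dry-run mode (a manual check, not something this log performs):

	# parse and validate the generated config; --dry-run makes no persistent changes
	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run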
	I0401 12:40:00.928942    2788 ssh_runner.go:195] Run: grep 172.19.154.221	control-plane.minikube.internal$ /etc/hosts
	I0401 12:40:00.935424    2788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.154.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 12:40:00.969465    2788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:40:01.192973    2788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 12:40:01.222717    2788 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600 for IP: 172.19.154.221
	I0401 12:40:01.222717    2788 certs.go:194] generating shared ca certs ...
	I0401 12:40:01.222717    2788 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:01.223988    2788 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 12:40:01.224488    2788 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 12:40:01.224546    2788 certs.go:256] generating profile certs ...
	I0401 12:40:01.225496    2788 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\client.key
	I0401 12:40:01.225625    2788 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\client.crt with IP's: []
	I0401 12:40:01.451679    2788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\client.crt ...
	I0401 12:40:01.452699    2788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\client.crt: {Name:mkd63ff2385cd1a5d4514feecab2bc6d1e09ead9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:01.453701    2788 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\client.key ...
	I0401 12:40:01.453701    2788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\client.key: {Name:mka7312f21fd6cbe3540ed313d598818cb9e55d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:01.454684    2788 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.key.72d20314
	I0401 12:40:01.455441    2788 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.crt.72d20314 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.154.221]
	I0401 12:40:01.596271    2788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.crt.72d20314 ...
	I0401 12:40:01.596271    2788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.crt.72d20314: {Name:mke12aecfa1a1f146d8c885083475108d6c5cb81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:01.597081    2788 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.key.72d20314 ...
	I0401 12:40:01.598080    2788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.key.72d20314: {Name:mk473cf9bc543bfef386b9660838d108b813a5ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:01.599094    2788 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.crt.72d20314 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.crt
	I0401 12:40:01.611481    2788 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.key.72d20314 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.key
	I0401 12:40:01.611785    2788 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.key
	I0401 12:40:01.612778    2788 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.crt with IP's: []
	I0401 12:40:01.945201    2788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.crt ...
	I0401 12:40:01.945201    2788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.crt: {Name:mkc8a17836013241d93f5294cab6fbe984988468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:01.946883    2788 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.key ...
	I0401 12:40:01.946883    2788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.key: {Name:mk4a5911d06dc8d99c08911299a5cf9c41364a4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:01.948206    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 12:40:01.948395    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0401 12:40:01.948608    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 12:40:01.948742    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 12:40:01.948950    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 12:40:01.949112    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 12:40:01.949306    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 12:40:01.958513    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 12:40:01.959564    2788 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 12:40:01.959564    2788 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 12:40:01.960378    2788 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 12:40:01.960544    2788 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 12:40:01.960544    2788 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 12:40:01.961295    2788 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 12:40:01.961538    2788 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 12:40:01.961538    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem -> /usr/share/ca-certificates/1260.pem
	I0401 12:40:01.962237    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> /usr/share/ca-certificates/12602.pem
	I0401 12:40:01.962366    2788 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:40:01.962531    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 12:40:02.015620    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 12:40:02.064963    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 12:40:02.114637    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 12:40:02.162763    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 12:40:02.213033    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 12:40:02.271518    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 12:40:02.317999    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 12:40:02.364884    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 12:40:02.419935    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 12:40:02.478449    2788 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 12:40:02.530715    2788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 12:40:02.583054    2788 ssh_runner.go:195] Run: openssl version
	I0401 12:40:02.592101    2788 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0401 12:40:02.606656    2788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 12:40:02.647703    2788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 12:40:02.655138    2788 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 12:40:02.655276    2788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 12:40:02.667028    2788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 12:40:02.676260    2788 command_runner.go:130] > 51391683
	I0401 12:40:02.688214    2788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
	I0401 12:40:02.721204    2788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 12:40:02.754273    2788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 12:40:02.762010    2788 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 12:40:02.762219    2788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 12:40:02.777214    2788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 12:40:02.785791    2788 command_runner.go:130] > 3ec20f2e
	I0401 12:40:02.799655    2788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 12:40:02.832027    2788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 12:40:02.862465    2788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:40:02.870496    2788 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:40:02.870835    2788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:40:02.884055    2788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:40:02.893216    2788 command_runner.go:130] > b5213941
	I0401 12:40:02.908444    2788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
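Each hash-and-symlink pair above follows OpenSSL's c_rehash convention: a CA is looked up under /etc/ssl/certs by <subject-hash>.0, where the hash is exactly what openssl x509 -hash -noout prints (b5213941 for minikubeCA above). Redone by hand for one certificate:

	# compute the subject hash, then expose the CA under that name
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"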
	I0401 12:40:02.942219    2788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 12:40:02.948633    2788 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 12:40:02.950328    2788 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 12:40:02.950795    2788 kubeadm.go:391] StartCluster: {Name:multinode-965600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.154.221 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:40:02.960001    2788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0401 12:40:02.995753    2788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 12:40:03.015805    2788 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0401 12:40:03.015877    2788 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0401 12:40:03.015877    2788 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0401 12:40:03.032171    2788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 12:40:03.060148    2788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 12:40:03.078496    2788 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0401 12:40:03.078496    2788 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0401 12:40:03.078496    2788 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0401 12:40:03.078496    2788 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 12:40:03.078496    2788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 12:40:03.078496    2788 kubeadm.go:156] found existing configuration files:
	
	I0401 12:40:03.090158    2788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 12:40:03.104156    2788 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 12:40:03.105150    2788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 12:40:03.116168    2788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 12:40:03.147949    2788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 12:40:03.165580    2788 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 12:40:03.165952    2788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 12:40:03.178178    2788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 12:40:03.209759    2788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 12:40:03.228285    2788 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 12:40:03.229169    2788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 12:40:03.241272    2788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 12:40:03.273311    2788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 12:40:03.290286    2788 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 12:40:03.291757    2788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 12:40:03.302289    2788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 12:40:03.322280    2788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 12:40:03.861145    2788 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 12:40:03.861145    2788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
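This preflight warning is harmless for the test run, since minikube already started kubelet itself at 12:40:01 above; on a long-lived node it would be silenced exactly as the message suggests:

	sudo systemctl enable kubelet.service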
	I0401 12:40:18.489775    2788 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 12:40:18.489830    2788 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0401 12:40:18.489830    2788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 12:40:18.490012    2788 command_runner.go:130] > [preflight] Running pre-flight checks
	I0401 12:40:18.490126    2788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 12:40:18.490126    2788 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 12:40:18.490276    2788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 12:40:18.490349    2788 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 12:40:18.490746    2788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 12:40:18.490746    2788 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 12:40:18.490945    2788 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 12:40:18.490973    2788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 12:40:18.494117    2788 out.go:204]   - Generating certificates and keys ...
	I0401 12:40:18.495125    2788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 12:40:18.495125    2788 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0401 12:40:18.495125    2788 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0401 12:40:18.495125    2788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 12:40:18.495125    2788 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 12:40:18.495125    2788 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 12:40:18.495125    2788 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 12:40:18.495125    2788 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0401 12:40:18.495125    2788 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0401 12:40:18.495125    2788 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 12:40:18.495125    2788 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0401 12:40:18.495125    2788 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 12:40:18.496117    2788 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0401 12:40:18.496117    2788 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 12:40:18.496117    2788 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-965600] and IPs [172.19.154.221 127.0.0.1 ::1]
	I0401 12:40:18.496117    2788 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-965600] and IPs [172.19.154.221 127.0.0.1 ::1]
	I0401 12:40:18.496117    2788 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0401 12:40:18.496117    2788 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 12:40:18.496117    2788 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-965600] and IPs [172.19.154.221 127.0.0.1 ::1]
	I0401 12:40:18.496117    2788 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-965600] and IPs [172.19.154.221 127.0.0.1 ::1]
	I0401 12:40:18.496117    2788 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 12:40:18.496117    2788 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 12:40:18.497140    2788 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 12:40:18.497140    2788 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 12:40:18.497140    2788 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 12:40:18.497140    2788 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0401 12:40:18.497140    2788 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 12:40:18.497140    2788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 12:40:18.497140    2788 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 12:40:18.497140    2788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 12:40:18.497140    2788 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 12:40:18.497140    2788 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 12:40:18.497140    2788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 12:40:18.497140    2788 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 12:40:18.498210    2788 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 12:40:18.498210    2788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 12:40:18.498210    2788 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 12:40:18.498210    2788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 12:40:18.498210    2788 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 12:40:18.498210    2788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 12:40:18.498210    2788 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 12:40:18.498210    2788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 12:40:18.502147    2788 out.go:204]   - Booting up control plane ...
	I0401 12:40:18.502147    2788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 12:40:18.502147    2788 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 12:40:18.502147    2788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 12:40:18.502147    2788 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 12:40:18.502147    2788 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 12:40:18.502147    2788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 12:40:18.502147    2788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 12:40:18.502147    2788 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 12:40:18.503138    2788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 12:40:18.503138    2788 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 12:40:18.503138    2788 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0401 12:40:18.503138    2788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 12:40:18.503138    2788 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 12:40:18.503138    2788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 12:40:18.503138    2788 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.506076 seconds
	I0401 12:40:18.503138    2788 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.506076 seconds
	I0401 12:40:18.504120    2788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 12:40:18.504120    2788 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 12:40:18.504120    2788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 12:40:18.504120    2788 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 12:40:18.504120    2788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 12:40:18.504120    2788 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0401 12:40:18.504120    2788 kubeadm.go:309] [mark-control-plane] Marking the node multinode-965600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 12:40:18.504120    2788 command_runner.go:130] > [mark-control-plane] Marking the node multinode-965600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 12:40:18.505123    2788 command_runner.go:130] > [bootstrap-token] Using token: mr0bli.zitejgwbpg98scvu
	I0401 12:40:18.505123    2788 kubeadm.go:309] [bootstrap-token] Using token: mr0bli.zitejgwbpg98scvu
	I0401 12:40:18.507116    2788 out.go:204]   - Configuring RBAC rules ...
	I0401 12:40:18.508115    2788 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 12:40:18.508115    2788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 12:40:18.508115    2788 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 12:40:18.508115    2788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 12:40:18.508115    2788 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 12:40:18.508115    2788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 12:40:18.508115    2788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 12:40:18.508115    2788 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 12:40:18.509114    2788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 12:40:18.509114    2788 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 12:40:18.509114    2788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 12:40:18.509114    2788 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 12:40:18.509114    2788 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 12:40:18.509114    2788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 12:40:18.510118    2788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 12:40:18.510118    2788 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0401 12:40:18.510118    2788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 12:40:18.510118    2788 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0401 12:40:18.510118    2788 kubeadm.go:309] 
	I0401 12:40:18.510118    2788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 12:40:18.510118    2788 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0401 12:40:18.510118    2788 kubeadm.go:309] 
	I0401 12:40:18.510118    2788 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0401 12:40:18.510118    2788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 12:40:18.510118    2788 kubeadm.go:309] 
	I0401 12:40:18.510118    2788 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0401 12:40:18.510118    2788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 12:40:18.510118    2788 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 12:40:18.510118    2788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 12:40:18.510118    2788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 12:40:18.510118    2788 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 12:40:18.511143    2788 kubeadm.go:309] 
	I0401 12:40:18.511143    2788 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0401 12:40:18.511143    2788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 12:40:18.511143    2788 kubeadm.go:309] 
	I0401 12:40:18.511143    2788 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 12:40:18.511143    2788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 12:40:18.511143    2788 kubeadm.go:309] 
	I0401 12:40:18.511143    2788 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0401 12:40:18.511143    2788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 12:40:18.511143    2788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 12:40:18.511143    2788 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 12:40:18.511143    2788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 12:40:18.511143    2788 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 12:40:18.511143    2788 kubeadm.go:309] 
	I0401 12:40:18.512123    2788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 12:40:18.512123    2788 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0401 12:40:18.512123    2788 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0401 12:40:18.512123    2788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 12:40:18.512123    2788 kubeadm.go:309] 
	I0401 12:40:18.512123    2788 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token mr0bli.zitejgwbpg98scvu \
	I0401 12:40:18.512123    2788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mr0bli.zitejgwbpg98scvu \
	I0401 12:40:18.512123    2788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c \
	I0401 12:40:18.512123    2788 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c \
	I0401 12:40:18.512123    2788 kubeadm.go:309] 	--control-plane 
	I0401 12:40:18.512123    2788 command_runner.go:130] > 	--control-plane 
	I0401 12:40:18.512123    2788 kubeadm.go:309] 
	I0401 12:40:18.513126    2788 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0401 12:40:18.513126    2788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 12:40:18.513126    2788 kubeadm.go:309] 
	I0401 12:40:18.513126    2788 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mr0bli.zitejgwbpg98scvu \
	I0401 12:40:18.513126    2788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mr0bli.zitejgwbpg98scvu \
	I0401 12:40:18.513126    2788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c 
	I0401 12:40:18.513126    2788 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c 
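(Editor's note: the join command echoed above can be scripted. A minimal Go sketch, assuming kubeadm is on the worker's PATH and reusing the token and CA hash exactly as printed in this log; on a real cluster they would come from `kubeadm token create --print-join-command`. As the log notes, the same command with --control-plane would join an additional control-plane node instead.)

// Hypothetical sketch: run the printed worker join command.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "mr0bli.zitejgwbpg98scvu",
		"--discovery-token-ca-cert-hash",
		"sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm join failed: %v\n%s", err, out)
	}
	log.Printf("joined:\n%s", out)
}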
	I0401 12:40:18.513126    2788 cni.go:84] Creating CNI manager for ""
	I0401 12:40:18.513126    2788 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 12:40:18.516117    2788 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 12:40:18.532726    2788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 12:40:18.542727    2788 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0401 12:40:18.543295    2788 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0401 12:40:18.543295    2788 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0401 12:40:18.543295    2788 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0401 12:40:18.543295    2788 command_runner.go:130] > Access: 2024-04-01 12:38:19.302549900 +0000
	I0401 12:40:18.543295    2788 command_runner.go:130] > Modify: 2024-03-27 22:52:09.000000000 +0000
	I0401 12:40:18.543295    2788 command_runner.go:130] > Change: 2024-04-01 12:38:10.175000000 +0000
	I0401 12:40:18.543295    2788 command_runner.go:130] >  Birth: -
	I0401 12:40:18.543659    2788 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0401 12:40:18.543659    2788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0401 12:40:18.613898    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 12:40:19.368073    2788 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0401 12:40:19.368946    2788 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0401 12:40:19.368946    2788 command_runner.go:130] > serviceaccount/kindnet created
	I0401 12:40:19.368946    2788 command_runner.go:130] > daemonset.apps/kindnet created
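(Editor's note: the four "created" lines above are kubectl's output from applying the kindnet CNI manifest. A hedged sketch of that step plus a rollout check; the binary and manifest paths are the ones from this log, and the rollout wait is an illustrative addition, not something minikube runs here.)

// Sketch: apply the CNI manifest, then wait for the kindnet DaemonSet.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.29.3/kubectl"
	run(kubectl, "--kubeconfig=/var/lib/minikube/kubeconfig",
		"apply", "-f", "/var/tmp/minikube/cni.yaml")
	run(kubectl, "--kubeconfig=/var/lib/minikube/kubeconfig", "-n", "kube-system",
		"rollout", "status", "daemonset/kindnet", "--timeout=2m")
}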
	I0401 12:40:19.369090    2788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 12:40:19.384528    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:19.385536    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-965600 minikube.k8s.io/updated_at=2024_04_01T12_40_19_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=multinode-965600 minikube.k8s.io/primary=true
	I0401 12:40:19.419562    2788 command_runner.go:130] > -16
	I0401 12:40:19.419562    2788 ops.go:34] apiserver oom_adj: -16
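(Editor's note: the oom_adj check above confirms the apiserver is a poor OOM-kill target; -16 means strongly protected. A minimal sketch of the same read, assuming the kube-apiserver PID is passed as the first argument instead of coming from pgrep.)

// Sketch: read /proc/<pid>/oom_adj, as the log's bash one-liner does.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	pid := os.Args[1] // e.g. the kube-apiserver PID from pgrep
	b, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(b)))
}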
	I0401 12:40:19.726062    2788 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0401 12:40:19.742220    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:19.755104    2788 command_runner.go:130] > node/multinode-965600 labeled
	I0401 12:40:19.979915    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:20.246288    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:20.383160    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:20.751782    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:20.889047    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:21.256557    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:21.382790    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:21.747814    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:21.876659    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:22.254294    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:22.397398    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:22.757370    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:22.911442    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:23.247655    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:23.366880    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:23.753826    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:23.911060    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:24.245742    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:24.386237    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:24.751026    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:24.865015    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:25.257777    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:25.366936    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:25.747090    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:25.891236    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:26.252341    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:26.378679    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:26.745844    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:26.880160    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:27.248097    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:27.368040    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:27.753624    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:27.873684    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:28.244710    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:28.384833    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:28.752394    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:28.925957    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:29.246617    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:29.371279    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:29.747828    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:29.866008    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:30.254512    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:30.378099    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:30.742701    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:30.864781    2788 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0401 12:40:31.251605    2788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:40:31.474622    2788 command_runner.go:130] > NAME      SECRETS   AGE
	I0401 12:40:31.474801    2788 command_runner.go:130] > default   0         0s
	I0401 12:40:31.474930    2788 kubeadm.go:1107] duration metric: took 12.1057547s to wait for elevateKubeSystemPrivileges
	W0401 12:40:31.475058    2788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 12:40:31.475117    2788 kubeadm.go:393] duration metric: took 28.5241229s to StartCluster
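(Editor's note: the NotFound loop above is expected behavior, not a failure: minikube polls `kubectl get sa default` roughly twice a second until the ServiceAccount controller creates the default account, about 12s here per the elevateKubeSystemPrivileges metric. A minimal sketch of that retry loop, assuming kubectl is on PATH and pointed at the cluster.)

// Sketch: retry until the default ServiceAccount exists or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// NotFound is expected until the ServiceAccount controller catches up.
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			log.Println("default ServiceAccount exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default ServiceAccount")
}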
	I0401 12:40:31.475180    2788 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:31.475343    2788 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:40:31.477095    2788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:40:31.477989    2788 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.154.221 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 12:40:31.480996    2788 out.go:177] * Verifying Kubernetes components...
	I0401 12:40:31.478988    2788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 12:40:31.478988    2788 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:40:31.481953    2788 addons.go:69] Setting storage-provisioner=true in profile "multinode-965600"
	I0401 12:40:31.484950    2788 addons.go:234] Setting addon storage-provisioner=true in "multinode-965600"
	I0401 12:40:31.481953    2788 addons.go:69] Setting default-storageclass=true in profile "multinode-965600"
	I0401 12:40:31.484950    2788 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:40:31.484950    2788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-965600"
	I0401 12:40:31.485989    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:40:31.485989    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
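(Editor's note: libmachine drives Hyper-V by shelling out to PowerShell, as the [executing ==>] lines show. A sketch of the same state query from Go; the VM name and command text mirror the log, and error handling is simplified.)

// Sketch: query a Hyper-V VM's state via PowerShell, libmachine-style.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		"( Hyper-V\\Get-VM multinode-965600 ).state").CombinedOutput()
	if err != nil {
		log.Fatalf("powershell failed: %v\n%s", err, out)
	}
	fmt.Println("VM state:", strings.TrimSpace(string(out))) // e.g. "Running"
}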
	I0401 12:40:31.501951    2788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:40:31.867078    2788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 12:40:31.964206    2788 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:40:31.965411    2788 kapi.go:59] client config for multinode-965600: &rest.Config{Host:"https://172.19.154.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-965600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-965600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 12:40:31.966408    2788 cert_rotation.go:137] Starting client certificate rotation controller
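(Editor's note: the &rest.Config dump above is what client-go derives from the kubeconfig. A short sketch, assuming the kubeconfig path from this log, that reconstructs an equivalent config and client; client-go fills in Host and the client cert/key/CA paths exactly as the dump shows.)

// Sketch: build a client config and clientset from a kubeconfig file.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("API server:", cfg.Host)
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}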
	I0401 12:40:31.967407    2788 node_ready.go:35] waiting up to 6m0s for node "multinode-965600" to be "Ready" ...
	I0401 12:40:31.967407    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:31.967407    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:31.967407    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:31.967407    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:31.987378    2788 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0401 12:40:31.987378    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:31.987470    2788 round_trippers.go:580]     Audit-Id: f5ecc1bb-e39a-45fe-8fbf-63ad5a0bbdaa
	I0401 12:40:31.987470    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:31.987470    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:31.987470    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:31.987470    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:31.987470    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:31 GMT
	I0401 12:40:31.987899    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:32.476442    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:32.476666    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:32.476666    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:32.476666    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:32.480552    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:32.481369    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:32.481439    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:32 GMT
	I0401 12:40:32.481439    2788 round_trippers.go:580]     Audit-Id: 7d194072-b864-4b9f-a89e-c42bb5b9fc9d
	I0401 12:40:32.481439    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:32.481439    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:32.481439    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:32.481544    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:32.482087    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:32.967743    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:32.968077    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:32.968077    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:32.968077    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:32.988212    2788 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0401 12:40:32.989154    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:32.989154    2788 round_trippers.go:580]     Audit-Id: b01cc54c-f19f-4355-b01d-616558344727
	I0401 12:40:32.989154    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:32.989154    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:32.989154    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:32.989154    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:32.989154    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:32 GMT
	I0401 12:40:32.989237    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:33.474551    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:33.474736    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:33.474736    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:33.474736    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:33.483659    2788 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 12:40:33.483659    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:33.483659    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:33.483659    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:33.483659    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:33.483659    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:33.483659    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:33 GMT
	I0401 12:40:33.483659    2788 round_trippers.go:580]     Audit-Id: 51dae0f2-e6d6-4b69-9f35-2bdbed9248de
	I0401 12:40:33.484200    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:33.944989    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:33.944989    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:33.946631    2788 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:40:33.947826    2788 kapi.go:59] client config for multinode-965600: &rest.Config{Host:"https://172.19.154.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-965600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-965600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x236fd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 12:40:33.949023    2788 addons.go:234] Setting addon default-storageclass=true in "multinode-965600"
	I0401 12:40:33.949267    2788 host.go:66] Checking if "multinode-965600" exists ...
	I0401 12:40:33.950171    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:40:33.966162    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:33.966162    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:33.969167    2788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 12:40:33.974178    2788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 12:40:33.974178    2788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
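(Editor's note: "scp memory --> path" above means minikube streams an in-memory manifest to the node over SSH rather than copying a local file. A hedged sketch of that pattern using golang.org/x/crypto/ssh; the host, user, and key path are the ones this log later prints at sshutil.go:53, sudo tee stands in for minikube's internal transfer, and the manifest bytes are a placeholder.)

// Sketch: push in-memory bytes to a remote file over SSH.
package main

import (
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "172.19.154.221:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	sess.Stdin = strings.NewReader("# manifest bytes would go here\n")
	if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
		log.Fatal(err)
	}
}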
	I0401 12:40:33.974178    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:40:33.978172    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:33.978172    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:33.978172    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:33.978172    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:33.984164    2788 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 12:40:33.985186    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:33.985186    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:33.985186    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:33.985186    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:33.985186    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:33.985186    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:33 GMT
	I0401 12:40:33.985186    2788 round_trippers.go:580]     Audit-Id: 6dbe119a-35b9-4bb7-8fd1-64720c764dcf
	I0401 12:40:33.985186    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:33.986160    2788 node_ready.go:53] node "multinode-965600" has status "Ready":"False"
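(Editor's note: the repeated GETs above are node_ready.go polling /api/v1/nodes/multinode-965600 until the Ready condition turns True. A sketch of the same wait with client-go, assuming the kubeconfig path from this log.)

// Sketch: poll a node's Ready condition until true or a deadline passes.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-965600", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					log.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node Ready")
}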
	I0401 12:40:34.468542    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:34.468639    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:34.468729    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:34.468729    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:34.472185    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:34.472462    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:34.472462    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:34 GMT
	I0401 12:40:34.472462    2788 round_trippers.go:580]     Audit-Id: a138a386-2623-48cb-b15f-9c86b51f2137
	I0401 12:40:34.472462    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:34.472462    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:34.472462    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:34.472462    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:34.473458    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:34.976457    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:34.976536    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:34.976536    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:34.976536    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:34.980096    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:34.980096    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:34.980856    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:34.980856    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:34 GMT
	I0401 12:40:34.980856    2788 round_trippers.go:580]     Audit-Id: 2157cb27-10c6-4e0b-bd9b-dc3f2a62ccee
	I0401 12:40:34.980856    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:34.980856    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:34.980856    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:34.981188    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:35.467783    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:35.467957    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:35.467957    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:35.467957    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:35.472454    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:35.472642    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:35.472642    2788 round_trippers.go:580]     Audit-Id: 2fd6ae98-a51b-4ddf-b962-f51cf68286b6
	I0401 12:40:35.472642    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:35.472642    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:35.472642    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:35.472642    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:35.472642    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:35 GMT
	I0401 12:40:35.473088    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:35.974912    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:35.975099    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:35.975099    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:35.975099    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:35.978716    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:35.978716    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:35.978988    2788 round_trippers.go:580]     Audit-Id: f9039ace-c8c6-442f-9349-6d0d731b5806
	I0401 12:40:35.978988    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:35.978988    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:35.979074    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:35.979108    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:35.979108    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:35 GMT
	I0401 12:40:35.979796    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:36.358887    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:36.358887    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:36.359885    2788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 12:40:36.359885    2788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 12:40:36.359885    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600 ).state
	I0401 12:40:36.431737    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:36.431904    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:36.431979    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:40:36.481840    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:36.481840    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:36.481840    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:36.481840    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:36.485857    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:36.485857    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:36.486601    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:36.486601    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:36 GMT
	I0401 12:40:36.486601    2788 round_trippers.go:580]     Audit-Id: be02f43f-0a71-479a-b467-cd1c28becd03
	I0401 12:40:36.486601    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:36.486601    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:36.486601    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:36.487284    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:36.488223    2788 node_ready.go:53] node "multinode-965600" has status "Ready":"False"
	I0401 12:40:36.976710    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:36.976710    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:36.976710    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:36.976710    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:36.985749    2788 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 12:40:36.986727    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:36.986727    2788 round_trippers.go:580]     Audit-Id: 7e174ed4-f9bf-4a59-8cde-f62e962cfca5
	I0401 12:40:36.986727    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:36.986727    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:36.986727    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:36.986727    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:36.986727    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:36 GMT
	I0401 12:40:36.986727    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:37.482033    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:37.482101    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:37.482101    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:37.482101    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:37.485694    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:37.485768    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:37.485768    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:37.485768    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:37.485768    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:37.485768    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:37 GMT
	I0401 12:40:37.485768    2788 round_trippers.go:580]     Audit-Id: 3413c191-f9d8-4ef5-877c-a74c5f322235
	I0401 12:40:37.485768    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:37.486482    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:37.971835    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:37.971909    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:37.971909    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:37.971909    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:37.975292    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:37.976224    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:37.976224    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:37.976224    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:37 GMT
	I0401 12:40:37.976224    2788 round_trippers.go:580]     Audit-Id: 89d75cf3-c899-493c-8110-2f94b1222a55
	I0401 12:40:37.976354    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:37.976354    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:37.976354    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:37.976579    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:38.479845    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:38.480061    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:38.480061    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:38.480061    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:38.773012    2788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:40:38.774016    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:38.774044    2788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:40:38.829393    2788 round_trippers.go:574] Response Status: 200 OK in 349 milliseconds
	I0401 12:40:38.829393    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:38.829393    2788 round_trippers.go:580]     Audit-Id: 395f18d5-3a94-4e2a-9cfa-74647c6dde39
	I0401 12:40:38.829933    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:38.829933    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:38.829933    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:38.829933    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:38.829933    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:38 GMT
	I0401 12:40:38.830406    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:38.830406    2788 node_ready.go:53] node "multinode-965600" has status "Ready":"False"
	I0401 12:40:38.975988    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:38.975988    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:38.976229    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:38.976229    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:39.069129    2788 round_trippers.go:574] Response Status: 200 OK in 92 milliseconds
	I0401 12:40:39.069129    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:39.069129    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:39 GMT
	I0401 12:40:39.069129    2788 round_trippers.go:580]     Audit-Id: 01e7f258-0353-471e-b85d-d879e67c5c1d
	I0401 12:40:39.069129    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:39.069129    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:39.069129    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:39.069129    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:39.069129    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:39.313914    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:40:39.313983    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:39.314471    2788 sshutil.go:53] new ssh client: &{IP:172.19.154.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:40:39.461880    2788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
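The sshutil/ssh_runner pair above dials the freshly created node over SSH and applies an addon manifest with the kubelet-bundled kubectl. A hedged sketch of the same step using golang.org/x/crypto/ssh; the address, user, key path, and remote command all come from the log lines above, but the code itself is illustrative, not minikube's ssh_runner.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as logged by sshutil.go:53.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.19.154.221:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same remote command as logged by ssh_runner.go:195.
	out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}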
	I0401 12:40:39.470706    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:39.470706    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:39.470761    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:39.470761    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:39.474913    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:39.475365    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:39.475365    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:39.475365    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:39.475365    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:39.475365    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:39 GMT
	I0401 12:40:39.475365    2788 round_trippers.go:580]     Audit-Id: e2772bcb-3a54-4b13-a1bb-123da270382b
	I0401 12:40:39.475365    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:39.475841    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:39.976512    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:39.976567    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:39.976567    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:39.976567    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:39.979642    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:39.980625    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:39.980649    2788 round_trippers.go:580]     Audit-Id: ebbc486e-8a5b-4327-b573-be3ec3cd376a
	I0401 12:40:39.980649    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:39.980649    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:39.980649    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:39.980649    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:39.980649    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:39 GMT
	I0401 12:40:39.980847    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:40.026784    2788 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0401 12:40:40.027752    2788 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0401 12:40:40.027752    2788 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0401 12:40:40.027752    2788 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0401 12:40:40.027752    2788 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0401 12:40:40.027752    2788 command_runner.go:130] > pod/storage-provisioner created
	I0401 12:40:40.471201    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:40.471258    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:40.471258    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:40.471258    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:40.475207    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:40.475207    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:40.475207    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:40 GMT
	I0401 12:40:40.475207    2788 round_trippers.go:580]     Audit-Id: dc0cf80f-2473-420f-a705-cf1918e3203f
	I0401 12:40:40.475207    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:40.475207    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:40.475207    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:40.475207    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:40.475751    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"332","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4936 chars]
	I0401 12:40:40.980229    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:40.980286    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:40.980286    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:40.980286    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:40.983975    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:40.983975    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:40.983975    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:40.983975    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:40.983975    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:40.983975    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:40 GMT
	I0401 12:40:40.983975    2788 round_trippers.go:580]     Audit-Id: 8c21703b-ac32-4901-a90d-38055e527683
	I0401 12:40:40.983975    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:40.985149    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:40.985685    2788 node_ready.go:49] node "multinode-965600" has status "Ready":"True"
	I0401 12:40:40.985685    2788 node_ready.go:38] duration metric: took 9.018215s for node "multinode-965600" to be "Ready" ...
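The node_ready loop that just completed is the classic client-go readiness poll: GET the Node object every ~500ms (as the timestamps above suggest) and check its NodeReady condition. A condensed Go sketch of that pattern, assuming a kubeconfig at the default location; it is not minikube's exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll roughly every 500ms until the node reports Ready.
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-965600", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println(`node "multinode-965600" has status "Ready":"True"`)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}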
	I0401 12:40:40.985860    2788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 12:40:40.985932    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods
	I0401 12:40:40.985932    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:40.985932    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:40.985932    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:40.991735    2788 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 12:40:40.992082    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:40.992082    2788 round_trippers.go:580]     Audit-Id: 16a5c56e-0dd5-4f73-a0ea-fb03b0c3a192
	I0401 12:40:40.992082    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:40.992082    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:40.992082    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:40.992082    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:40.992162    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:40 GMT
	I0401 12:40:40.993701    2788 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"410","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62136 chars]
	I0401 12:40:41.004206    2788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vhxkq" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:41.004206    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vhxkq
	I0401 12:40:41.004206    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:41.004206    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:41.004206    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:41.007905    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:41.007905    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:41.007905    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:41.008393    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:41.008393    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:41.008393    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:41.008487    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:41 GMT
	I0401 12:40:41.008487    2788 round_trippers.go:580]     Audit-Id: 2dc9fc88-1200-47a3-8e39-750e34f3a7a8
	I0401 12:40:41.008881    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"410","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0401 12:40:41.009505    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:41.009505    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:41.009505    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:41.009505    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:41.012219    2788 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 12:40:41.013002    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:41.013002    2788 round_trippers.go:580]     Audit-Id: d9ba24de-41dd-4095-8977-b23aa30238dc
	I0401 12:40:41.013002    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:41.013002    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:41.013002    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:41.013002    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:41.013002    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:41 GMT
	I0401 12:40:41.013364    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:41.516899    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vhxkq
	I0401 12:40:41.516899    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:41.516899    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:41.516899    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:41.521827    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:41.521827    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:41.521827    2788 round_trippers.go:580]     Audit-Id: 8a901ab8-596d-4952-8a2c-8250d3d069ca
	I0401 12:40:41.522320    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:41.522320    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:41.522320    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:41.522320    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:41.522363    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:41 GMT
	I0401 12:40:41.522472    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"410","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0401 12:40:41.523063    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:41.523063    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:41.523063    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:41.523063    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:41.529815    2788 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 12:40:41.529815    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:41.529815    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:41.529815    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:41.529815    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:41.529815    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:41 GMT
	I0401 12:40:41.529815    2788 round_trippers.go:580]     Audit-Id: 1772e1e0-5bcb-4d97-b4d1-05b8e0d8f7af
	I0401 12:40:41.529815    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:41.530492    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:41.597469    2788 main.go:141] libmachine: [stdout =====>] : 172.19.154.221
	
	I0401 12:40:41.597469    2788 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:40:41.597469    2788 sshutil.go:53] new ssh client: &{IP:172.19.154.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600\id_rsa Username:docker}
	I0401 12:40:41.791566    2788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 12:40:42.008184    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vhxkq
	I0401 12:40:42.008184    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:42.008184    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:42.008184    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:42.013689    2788 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 12:40:42.013689    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:42.013689    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:42.013689    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:42.013689    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:42.013689    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:42 GMT
	I0401 12:40:42.013689    2788 round_trippers.go:580]     Audit-Id: 0dda5449-5ba1-4ff5-97a5-65446a39bbad
	I0401 12:40:42.013689    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:42.014389    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"410","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0401 12:40:42.015272    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:42.015272    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:42.015272    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:42.015272    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:42.018095    2788 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 12:40:42.018095    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:42.018095    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:42.018095    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:42.018095    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:42 GMT
	I0401 12:40:42.018095    2788 round_trippers.go:580]     Audit-Id: c7c96f87-424d-4a9d-9089-120de9a6d17d
	I0401 12:40:42.018095    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:42.018095    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:42.019684    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:42.087054    2788 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0401 12:40:42.090077    2788 round_trippers.go:463] GET https://172.19.154.221:8443/apis/storage.k8s.io/v1/storageclasses
	I0401 12:40:42.090077    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:42.090077    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:42.090077    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:42.094749    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:42.095163    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:42.095163    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:42 GMT
	I0401 12:40:42.095163    2788 round_trippers.go:580]     Audit-Id: efc395f6-bb31-4053-be32-c51dee55c416
	I0401 12:40:42.095204    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:42.095204    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:42.095204    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:42.095204    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:42.095204    2788 round_trippers.go:580]     Content-Length: 1273
	I0401 12:40:42.095276    2788 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"standard","uid":"ca7db9d2-9750-4431-ae03-321ce1a6f894","resourceVersion":"420","creationTimestamp":"2024-04-01T12:40:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-01T12:40:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0401 12:40:42.096091    2788 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ca7db9d2-9750-4431-ae03-321ce1a6f894","resourceVersion":"420","creationTimestamp":"2024-04-01T12:40:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-01T12:40:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0401 12:40:42.096173    2788 round_trippers.go:463] PUT https://172.19.154.221:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0401 12:40:42.096243    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:42.096300    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:42.096300    2788 round_trippers.go:473]     Content-Type: application/json
	I0401 12:40:42.096300    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:42.101102    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:42.101102    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:42.101197    2788 round_trippers.go:580]     Audit-Id: e0abe984-4c19-4d3f-98b4-d41f631c4989
	I0401 12:40:42.101197    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:42.101197    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:42.101197    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:42.101197    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:42.101197    2788 round_trippers.go:580]     Content-Length: 1220
	I0401 12:40:42.101197    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:42 GMT
	I0401 12:40:42.101327    2788 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ca7db9d2-9750-4431-ae03-321ce1a6f894","resourceVersion":"420","creationTimestamp":"2024-04-01T12:40:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-01T12:40:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0401 12:40:42.106186    2788 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 12:40:42.108345    2788 addons.go:505] duration metric: took 10.6292834s for enable addons: enabled=[storage-provisioner default-storageclass]
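The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses/standard just above is the default-storageclass addon doing a read-modify-write to ensure the class carries the is-default-class annotation. An equivalent client-go sketch, again assuming the default kubeconfig location and offered only as an illustration of the logged API traffic:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET the StorageClass, mirroring the logged request.
	sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Mark it as the cluster default, then PUT it back.
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}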
	I0401 12:40:42.513820    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vhxkq
	I0401 12:40:42.513820    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:42.513820    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:42.513820    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:42.518403    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:42.518403    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:42.518403    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:42 GMT
	I0401 12:40:42.518403    2788 round_trippers.go:580]     Audit-Id: ac4a8f1f-b852-4284-8690-e372f27edb8d
	I0401 12:40:42.519258    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:42.519258    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:42.519258    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:42.519301    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:42.519429    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"410","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0401 12:40:42.520692    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:42.520692    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:42.520777    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:42.520777    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:42.524193    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:42.524193    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:42.524193    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:42.524193    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:42 GMT
	I0401 12:40:42.524193    2788 round_trippers.go:580]     Audit-Id: 14ed2f45-03cb-4bf7-ab9f-009ab2c24f6d
	I0401 12:40:42.524193    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:42.524904    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:42.524904    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:42.525141    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:43.016606    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vhxkq
	I0401 12:40:43.016606    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:43.016606    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:43.016606    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:43.020187    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:43.020187    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:43.020187    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:43.020187    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:43.020187    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:43.020187    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:43 GMT
	I0401 12:40:43.021125    2788 round_trippers.go:580]     Audit-Id: 89be0429-bf05-4ae7-a21d-9255ce599d74
	I0401 12:40:43.021125    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:43.021529    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"410","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0401 12:40:43.022370    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:43.022429    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:43.022429    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:43.022429    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:43.025190    2788 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 12:40:43.025440    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:43.025440    2788 round_trippers.go:580]     Audit-Id: b6f006c0-6c08-40bf-ab42-3fd11703b575
	I0401 12:40:43.025440    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:43.025440    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:43.025440    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:43.025440    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:43.025440    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:43 GMT
	I0401 12:40:43.025639    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:43.026136    2788 pod_ready.go:102] pod "coredns-76f75df574-vhxkq" in "kube-system" namespace has status "Ready":"False"
	I0401 12:40:43.518092    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vhxkq
	I0401 12:40:43.518164    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:43.518164    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:43.518164    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:43.539638    2788 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0401 12:40:43.540191    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:43.540191    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:43.540329    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:43.540329    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:43 GMT
	I0401 12:40:43.540329    2788 round_trippers.go:580]     Audit-Id: c7a5cbfb-4d79-45df-a46c-6fd3c2bcadac
	I0401 12:40:43.540329    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:43.540329    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:43.540831    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"431","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0401 12:40:43.541807    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:43.541867    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:43.541867    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:43.541867    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:43.560555    2788 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0401 12:40:43.561527    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:43.561527    2788 round_trippers.go:580]     Audit-Id: 698d0a70-596e-41b9-8fb5-1e9f9d41682e
	I0401 12:40:43.561527    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:43.561527    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:43.561527    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:43.561527    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:43.561527    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:43 GMT
	I0401 12:40:43.561527    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:43.562532    2788 pod_ready.go:92] pod "coredns-76f75df574-vhxkq" in "kube-system" namespace has status "Ready":"True"
	I0401 12:40:43.562532    2788 pod_ready.go:81] duration metric: took 2.5583088s for pod "coredns-76f75df574-vhxkq" in "kube-system" namespace to be "Ready" ...
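The pod_ready wait that just finished applies the same poll to each system-critical pod, checking the PodReady condition under the label selectors listed earlier (k8s-app=kube-dns, component=etcd, and so on). A hedged Go sketch of that check for one selector; kubeconfig location, interval, and structure are assumptions, not minikube's exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		// One of the system-critical selectors from the log.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for _, p := range pods.Items {
				if !podReady(p) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all kube-dns pods are Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for pods to be Ready")
}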
	I0401 12:40:43.562600    2788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wbm5g" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:43.562718    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-wbm5g
	I0401 12:40:43.562794    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:43.562794    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:43.562794    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:43.573205    2788 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 12:40:43.573205    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:43.573205    2788 round_trippers.go:580]     Audit-Id: 39b6dc16-8fbc-4f72-ab83-45b2501000a6
	I0401 12:40:43.573284    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:43.573284    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:43.573284    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:43.573284    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:43.573284    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:43 GMT
	I0401 12:40:43.573607    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-wbm5g","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"c40f0e5d-01bc-4222-953b-b253a72a624a","resourceVersion":"412","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0401 12:40:43.574057    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:43.574057    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:43.574057    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:43.574057    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:43.577499    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:43.577499    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:43.577499    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:43.577499    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:43.578543    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:43.578543    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:43.578543    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:43 GMT
	I0401 12:40:43.578543    2788 round_trippers.go:580]     Audit-Id: 21ac8383-fd59-42ef-8e67-a441c373738a
	I0401 12:40:43.578634    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:44.068368    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-wbm5g
	I0401 12:40:44.068446    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.068446    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.068446    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.072749    2788 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 12:40:44.072749    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.072749    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.072749    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.072749    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.072749    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.072857    2788 round_trippers.go:580]     Audit-Id: 3365860e-9719-4ab1-b4c4-241c246955b9
	I0401 12:40:44.072857    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.073161    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-wbm5g","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"c40f0e5d-01bc-4222-953b-b253a72a624a","resourceVersion":"437","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0401 12:40:44.073978    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.073978    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.073978    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.073978    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.079408    2788 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 12:40:44.079616    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.079662    2788 round_trippers.go:580]     Audit-Id: 568f9db6-af5c-4bd1-bc4e-cf48bdb5dcd4
	I0401 12:40:44.079662    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.079662    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.079662    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.079662    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.079662    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.079722    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:44.080528    2788 pod_ready.go:92] pod "coredns-76f75df574-wbm5g" in "kube-system" namespace has status "Ready":"True"
	I0401 12:40:44.080528    2788 pod_ready.go:81] duration metric: took 517.9251ms for pod "coredns-76f75df574-wbm5g" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.080528    2788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.080528    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-965600
	I0401 12:40:44.080528    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.080528    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.080528    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.083686    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:44.083686    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.083686    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.083686    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.083686    2788 round_trippers.go:580]     Audit-Id: 4082be41-f52f-4a37-b382-f3b1cf0dd76f
	I0401 12:40:44.083686    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.083686    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.083686    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.083686    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-965600","namespace":"kube-system","uid":"624b0a0d-9ae8-446f-b657-5d9860fdc55c","resourceVersion":"390","creationTimestamp":"2024-04-01T12:40:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.154.221:2379","kubernetes.io/config.hash":"18e528749de59cce7af1d66c677e247c","kubernetes.io/config.mirror":"18e528749de59cce7af1d66c677e247c","kubernetes.io/config.seen":"2024-04-01T12:40:18.508462306Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0401 12:40:44.084865    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.084865    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.084939    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.084939    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.087703    2788 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 12:40:44.087703    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.087703    2788 round_trippers.go:580]     Audit-Id: e28e6680-679a-4578-a28a-034116d08f2c
	I0401 12:40:44.087703    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.087703    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.088123    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.088123    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.088123    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.088421    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:44.088549    2788 pod_ready.go:92] pod "etcd-multinode-965600" in "kube-system" namespace has status "Ready":"True"
	I0401 12:40:44.088549    2788 pod_ready.go:81] duration metric: took 8.0211ms for pod "etcd-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.088549    2788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.088549    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-965600
	I0401 12:40:44.088549    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.088549    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.088549    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.092152    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:44.092152    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.092152    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.092152    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.092152    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.092152    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.092152    2788 round_trippers.go:580]     Audit-Id: 8f169231-885e-4748-aaac-55d1d360501c
	I0401 12:40:44.092152    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.092984    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-965600","namespace":"kube-system","uid":"51e82653-9e1e-4e36-b46b-5b8a9b6fbebd","resourceVersion":"387","creationTimestamp":"2024-04-01T12:40:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.154.221:8443","kubernetes.io/config.hash":"46b133d84015de92df22605554f93c5c","kubernetes.io/config.mirror":"46b133d84015de92df22605554f93c5c","kubernetes.io/config.seen":"2024-04-01T12:40:18.508468606Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0401 12:40:44.093093    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.093093    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.093093    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.093093    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.096716    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:44.097068    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.097068    2788 round_trippers.go:580]     Audit-Id: 69958d7d-51ff-47f2-b400-1e73de1df2e9
	I0401 12:40:44.097068    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.097068    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.097068    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.097068    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.097068    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.097635    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:44.098005    2788 pod_ready.go:92] pod "kube-apiserver-multinode-965600" in "kube-system" namespace has status "Ready":"True"
	I0401 12:40:44.098005    2788 pod_ready.go:81] duration metric: took 9.4551ms for pod "kube-apiserver-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.098005    2788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.098005    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-965600
	I0401 12:40:44.098005    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.098005    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.098005    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.100821    2788 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 12:40:44.100821    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.101785    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.101785    2788 round_trippers.go:580]     Audit-Id: ed2de209-4e36-4302-86a0-f7ef78946a5c
	I0401 12:40:44.101785    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.101785    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.101785    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.101785    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.101785    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-965600","namespace":"kube-system","uid":"979ed567-5ef7-4a02-b458-09b771e5334d","resourceVersion":"389","creationTimestamp":"2024-04-01T12:40:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"459615590dcc4ac3b5fda6e702322cb9","kubernetes.io/config.mirror":"459615590dcc4ac3b5fda6e702322cb9","kubernetes.io/config.seen":"2024-04-01T12:40:18.508470406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0401 12:40:44.102444    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.102444    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.102444    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.102444    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.106225    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:44.106426    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.106426    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.106507    2788 round_trippers.go:580]     Audit-Id: 059f1e5a-7f8b-4953-ba72-25d8d6a8d69b
	I0401 12:40:44.106507    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.106507    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.106507    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.106507    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.106507    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:44.107282    2788 pod_ready.go:92] pod "kube-controller-manager-multinode-965600" in "kube-system" namespace has status "Ready":"True"
	I0401 12:40:44.107282    2788 pod_ready.go:81] duration metric: took 9.2771ms for pod "kube-controller-manager-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.107282    2788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-426zj" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.131872    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-426zj
	I0401 12:40:44.132071    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.132071    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.132071    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.134326    2788 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 12:40:44.134326    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.134326    2788 round_trippers.go:580]     Audit-Id: 066031a7-08a6-460b-8861-23a550804741
	I0401 12:40:44.134326    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.134326    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.134326    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.134326    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.135333    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.135679    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-426zj","generateName":"kube-proxy-","namespace":"kube-system","uid":"521b81d0-8859-44fb-baa6-33e43d5d5b9b","resourceVersion":"382","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"62c55264-7de9-4ca6-af3d-027b80546b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62c55264-7de9-4ca6-af3d-027b80546b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0401 12:40:44.318356    2788 request.go:629] Waited for 181.6009ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.318608    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.318608    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.318608    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.318608    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.322199    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:44.322199    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.322199    2788 round_trippers.go:580]     Audit-Id: 199b2a5b-7bf2-4640-b265-5b82f973c811
	I0401 12:40:44.322199    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.322199    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.322199    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.322199    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.322611    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.323192    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:44.324213    2788 pod_ready.go:92] pod "kube-proxy-426zj" in "kube-system" namespace has status "Ready":"True"
	I0401 12:40:44.324276    2788 pod_ready.go:81] duration metric: took 216.993ms for pod "kube-proxy-426zj" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.324276    2788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.520669    2788 request.go:629] Waited for 196.0792ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-965600
	I0401 12:40:44.520767    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-965600
	I0401 12:40:44.521007    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.521007    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.521007    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.524596    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:44.525585    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.525585    2788 round_trippers.go:580]     Audit-Id: ac723e43-853d-4243-8faf-a04865bae41c
	I0401 12:40:44.525585    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.525585    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.525585    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.525694    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.525694    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.525906    2788 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-965600","namespace":"kube-system","uid":"cbd798aa-4cf4-4a64-9204-384943212fac","resourceVersion":"391","creationTimestamp":"2024-04-01T12:40:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c55d26d2c85fa57656844950d423f175","kubernetes.io/config.mirror":"c55d26d2c85fa57656844950d423f175","kubernetes.io/config.seen":"2024-04-01T12:40:18.508471606Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0401 12:40:44.723305    2788 request.go:629] Waited for 196.6909ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.723606    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes/multinode-965600
	I0401 12:40:44.723606    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.723680    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.723680    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.727473    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:44.727473    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.727473    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.727473    2788 round_trippers.go:580]     Audit-Id: 05eb5de3-c8e9-4bcf-9b4f-cad1ca0fec2f
	I0401 12:40:44.727473    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.727473    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.727473    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.727473    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.727473    2788 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-01T12:40:14Z","fieldsType":"Fields [truncated 4791 chars]
	I0401 12:40:44.728435    2788 pod_ready.go:92] pod "kube-scheduler-multinode-965600" in "kube-system" namespace has status "Ready":"True"
	I0401 12:40:44.728435    2788 pod_ready.go:81] duration metric: took 404.1563ms for pod "kube-scheduler-multinode-965600" in "kube-system" namespace to be "Ready" ...
	I0401 12:40:44.728498    2788 pod_ready.go:38] duration metric: took 3.7426118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
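
The pod_ready.go entries above poll each system pod until its PodReady condition reports True, which is where the per-pod "took ..." durations come from. A minimal sketch of that kind of readiness loop, assuming client-go; the helper name and polling interval are illustrative, not minikube's actual code:

    // Sketch only: polls a pod until its Ready condition is True, the way
    // the pod_ready.go log lines above describe. Assumes client-go; the
    // helper name and signature are illustrative, not minikube's API.
    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient error: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
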
	I0401 12:40:44.728498    2788 api_server.go:52] waiting for apiserver process to appear ...
	I0401 12:40:44.744382    2788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 12:40:44.773275    2788 command_runner.go:130] > 2225
	I0401 12:40:44.773729    2788 api_server.go:72] duration metric: took 13.2946483s to wait for apiserver process to appear ...
	I0401 12:40:44.773729    2788 api_server.go:88] waiting for apiserver healthz status ...
	I0401 12:40:44.773729    2788 api_server.go:253] Checking apiserver healthz at https://172.19.154.221:8443/healthz ...
	I0401 12:40:44.781565    2788 api_server.go:279] https://172.19.154.221:8443/healthz returned 200:
	ok
	I0401 12:40:44.781791    2788 round_trippers.go:463] GET https://172.19.154.221:8443/version
	I0401 12:40:44.781829    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.781829    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.781829    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.783653    2788 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 12:40:44.783653    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.783653    2788 round_trippers.go:580]     Audit-Id: 1f6c3011-aed0-4e30-a5c6-d1e766512d41
	I0401 12:40:44.783653    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.783653    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.783653    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.783653    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.783950    2788 round_trippers.go:580]     Content-Length: 263
	I0401 12:40:44.783950    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.784156    2788 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0401 12:40:44.784330    2788 api_server.go:141] control plane version: v1.29.3
	I0401 12:40:44.784408    2788 api_server.go:131] duration metric: took 10.6791ms to wait for apiserver health ...
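
The healthz probe and the /version call above are two plain GETs against the apiserver: the first must return the literal body "ok", the second returns the JSON payload shown. A sketch of the same pair of checks via client-go's REST client, assuming a working *rest.Config; the helper and struct names are illustrative:

    // Sketch: reproduce the /healthz and /version checks from the log.
    // Assumes a valid *rest.Config; names are illustrative.
    package main

    import (
    	"context"
    	"encoding/json"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    }

    func checkAPIServer(ctx context.Context, cfg *rest.Config) (string, error) {
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return "", err
    	}
    	// GET /healthz must return "ok", as in the log above.
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil || string(body) != "ok" {
    		return "", fmt.Errorf("healthz not ok: %q, %v", body, err)
    	}
    	// GET /version returns the JSON payload shown above.
    	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/version").DoRaw(ctx)
    	if err != nil {
    		return "", err
    	}
    	var v versionInfo
    	if err := json.Unmarshal(raw, &v); err != nil {
    		return "", err
    	}
    	return v.GitVersion, nil // e.g. "v1.29.3" in this run
    }
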
	I0401 12:40:44.784453    2788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 12:40:44.927505    2788 request.go:629] Waited for 142.8972ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods
	I0401 12:40:44.927747    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods
	I0401 12:40:44.927747    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:44.927747    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:44.927747    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:44.938797    2788 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 12:40:44.939149    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:44.939149    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:44.939149    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:44.939149    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:44 GMT
	I0401 12:40:44.939149    2788 round_trippers.go:580]     Audit-Id: 5da5d694-4a5d-47ac-8fbc-5941cf2bea7d
	I0401 12:40:44.939149    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:44.939149    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:44.942036    2788 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"442"},"items":[{"metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"431","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 64071 chars]
	I0401 12:40:44.945215    2788 system_pods.go:59] 9 kube-system pods found
	I0401 12:40:44.945362    2788 system_pods.go:61] "coredns-76f75df574-vhxkq" [4af8fdd8-85f7-47c9-815c-80ca21486d61] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "coredns-76f75df574-wbm5g" [c40f0e5d-01bc-4222-953b-b253a72a624a] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "etcd-multinode-965600" [624b0a0d-9ae8-446f-b657-5d9860fdc55c] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "kindnet-pfltb" [f1bc27de-58b6-4b36-99bd-6bdb473fc573] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "kube-apiserver-multinode-965600" [51e82653-9e1e-4e36-b46b-5b8a9b6fbebd] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "kube-controller-manager-multinode-965600" [979ed567-5ef7-4a02-b458-09b771e5334d] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "kube-proxy-426zj" [521b81d0-8859-44fb-baa6-33e43d5d5b9b] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "kube-scheduler-multinode-965600" [cbd798aa-4cf4-4a64-9204-384943212fac] Running
	I0401 12:40:44.945362    2788 system_pods.go:61] "storage-provisioner" [0d2b4875-623c-48e9-9a6c-d4a18ae61c82] Running
	I0401 12:40:44.945443    2788 system_pods.go:74] duration metric: took 160.8726ms to wait for pod list to return data ...
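
The recurring "Waited for ... due to client-side throttling, not priority and fairness" entries come from client-go's token-bucket rate limiter, which blocks any request that exceeds the configured QPS/Burst budget before it ever reaches the server-side priority-and-fairness machinery. A sketch of where that limiter lives; the QPS and Burst values are client-go's illustrative defaults, not necessarily what minikube sets:

    // Sketch: the "client-side throttling" waits in the log come from
    // client-go's token-bucket limiter, driven by QPS/Burst on rest.Config.
    // Values below are client-go's defaults, shown for illustration.
    package main

    import (
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/util/flowcontrol"
    )

    func withThrottling(cfg *rest.Config) *rest.Config {
    	cfg.QPS = 5    // steady-state requests per second
    	cfg.Burst = 10 // short bursts allowed above QPS
    	// Equivalent explicit limiter; requests over budget block and emit
    	// the "Waited for ... due to client-side throttling" log line.
    	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
    	return cfg
    }
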
	I0401 12:40:44.945443    2788 default_sa.go:34] waiting for default service account to be created ...
	I0401 12:40:45.129977    2788 request.go:629] Waited for 184.2101ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.154.221:8443/api/v1/namespaces/default/serviceaccounts
	I0401 12:40:45.130066    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/default/serviceaccounts
	I0401 12:40:45.130066    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:45.130066    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:45.130066    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:45.133823    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:45.133823    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:45.133823    2788 round_trippers.go:580]     Audit-Id: 973b380c-abdb-42dc-9a99-89f9a472c683
	I0401 12:40:45.133823    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:45.134187    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:45.134187    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:45.134187    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:45.134187    2788 round_trippers.go:580]     Content-Length: 261
	I0401 12:40:45.134187    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:45 GMT
	I0401 12:40:45.134265    2788 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8dff8036-ba87-495b-b9f5-b4c126285846","resourceVersion":"344","creationTimestamp":"2024-04-01T12:40:31Z"}}]}
	I0401 12:40:45.134584    2788 default_sa.go:45] found service account: "default"
	I0401 12:40:45.134584    2788 default_sa.go:55] duration metric: took 189.1399ms for default service account to be created ...
	I0401 12:40:45.134642    2788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 12:40:45.318420    2788 request.go:629] Waited for 183.5147ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods
	I0401 12:40:45.318420    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/namespaces/kube-system/pods
	I0401 12:40:45.318569    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:45.318569    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:45.318569    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:45.327384    2788 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0401 12:40:45.328246    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:45.328246    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:45.328246    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:45.328246    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:45 GMT
	I0401 12:40:45.328246    2788 round_trippers.go:580]     Audit-Id: 543a3ef4-a845-4b44-af6e-c93910d24ea2
	I0401 12:40:45.328246    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:45.328246    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:45.329296    2788 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-76f75df574-vhxkq","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"4af8fdd8-85f7-47c9-815c-80ca21486d61","resourceVersion":"431","creationTimestamp":"2024-04-01T12:40:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"6364b14a-b0df-4bc0-bf67-b4c36f22c434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-01T12:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6364b14a-b0df-4bc0-bf67-b4c36f22c434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 64071 chars]
	I0401 12:40:45.332306    2788 system_pods.go:86] 9 kube-system pods found
	I0401 12:40:45.332374    2788 system_pods.go:89] "coredns-76f75df574-vhxkq" [4af8fdd8-85f7-47c9-815c-80ca21486d61] Running
	I0401 12:40:45.332374    2788 system_pods.go:89] "coredns-76f75df574-wbm5g" [c40f0e5d-01bc-4222-953b-b253a72a624a] Running
	I0401 12:40:45.332374    2788 system_pods.go:89] "etcd-multinode-965600" [624b0a0d-9ae8-446f-b657-5d9860fdc55c] Running
	I0401 12:40:45.332374    2788 system_pods.go:89] "kindnet-pfltb" [f1bc27de-58b6-4b36-99bd-6bdb473fc573] Running
	I0401 12:40:45.332374    2788 system_pods.go:89] "kube-apiserver-multinode-965600" [51e82653-9e1e-4e36-b46b-5b8a9b6fbebd] Running
	I0401 12:40:45.332442    2788 system_pods.go:89] "kube-controller-manager-multinode-965600" [979ed567-5ef7-4a02-b458-09b771e5334d] Running
	I0401 12:40:45.332442    2788 system_pods.go:89] "kube-proxy-426zj" [521b81d0-8859-44fb-baa6-33e43d5d5b9b] Running
	I0401 12:40:45.332442    2788 system_pods.go:89] "kube-scheduler-multinode-965600" [cbd798aa-4cf4-4a64-9204-384943212fac] Running
	I0401 12:40:45.332442    2788 system_pods.go:89] "storage-provisioner" [0d2b4875-623c-48e9-9a6c-d4a18ae61c82] Running
	I0401 12:40:45.332511    2788 system_pods.go:126] duration metric: took 197.7983ms to wait for k8s-apps to be running ...
	I0401 12:40:45.332553    2788 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 12:40:45.345703    2788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 12:40:45.378134    2788 system_svc.go:56] duration metric: took 45.6226ms WaitForService to wait for kubelet
	I0401 12:40:45.378134    2788 kubeadm.go:576] duration metric: took 13.8990487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 12:40:45.378134    2788 node_conditions.go:102] verifying NodePressure condition ...
	I0401 12:40:45.519087    2788 request.go:629] Waited for 140.9522ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.154.221:8443/api/v1/nodes
	I0401 12:40:45.519348    2788 round_trippers.go:463] GET https://172.19.154.221:8443/api/v1/nodes
	I0401 12:40:45.519348    2788 round_trippers.go:469] Request Headers:
	I0401 12:40:45.519348    2788 round_trippers.go:473]     Accept: application/json, */*
	I0401 12:40:45.519458    2788 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0401 12:40:45.523112    2788 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 12:40:45.523112    2788 round_trippers.go:577] Response Headers:
	I0401 12:40:45.523112    2788 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07e14c58-6771-439c-b04e-6d0672a5359c
	I0401 12:40:45.523112    2788 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f432cc74-d415-4a04-bee8-cc6b4b586974
	I0401 12:40:45.523112    2788 round_trippers.go:580]     Date: Mon, 01 Apr 2024 12:40:45 GMT
	I0401 12:40:45.523112    2788 round_trippers.go:580]     Audit-Id: 9e9d99d4-bf70-426a-81a3-e031c6aaa9e9
	I0401 12:40:45.523112    2788 round_trippers.go:580]     Cache-Control: no-cache, private
	I0401 12:40:45.523112    2788 round_trippers.go:580]     Content-Type: application/json
	I0401 12:40:45.523801    2788 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"multinode-965600","uid":"a47ad1b6-4ac1-451e-ac14-b129816c236b","resourceVersion":"406","creationTimestamp":"2024-04-01T12:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-965600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8aa0d860b7e6047018bc1a9124397cd2c931e0d","minikube.k8s.io/name":"multinode-965600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_01T12_40_19_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4844 chars]
	I0401 12:40:45.524489    2788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 12:40:45.524489    2788 node_conditions.go:123] node cpu capacity is 2
	I0401 12:40:45.524561    2788 node_conditions.go:105] duration metric: took 146.4261ms to run NodePressure ...
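
The node_conditions.go step lists the nodes, confirms no pressure conditions are set, and records capacity (the "ephemeral capacity" and "cpu capacity" lines just above). A sketch under the same assumptions (client-go; the function name is illustrative):

    // Sketch: list nodes, verify no pressure conditions are True, and
    // print capacity as the node_conditions.go lines above do.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status == corev1.ConditionTrue {
    					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
    				}
    			}
    		}
    		// Matches the log: "node storage ephemeral capacity is 17734596Ki",
    		// "node cpu capacity is 2".
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
    	}
    	return nil
    }
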
	I0401 12:40:45.524561    2788 start.go:240] waiting for startup goroutines ...
	I0401 12:40:45.524617    2788 start.go:245] waiting for cluster config update ...
	I0401 12:40:45.524617    2788 start.go:254] writing updated cluster config ...
	I0401 12:40:45.537373    2788 ssh_runner.go:195] Run: rm -f paused
	I0401 12:40:45.699326    2788 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 12:40:45.703101    2788 out.go:177] * Done! kubectl is now configured to use "multinode-965600" cluster and "default" namespace by default
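
The closing "minor skew: 0" note compares kubectl's minor version against the cluster's and warns when they drift apart. A hypothetical helper showing that comparison for versions like "1.29.3" (not minikube's actual implementation):

    // Sketch: compute the minor-version skew between client and server,
    // as in "kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)".
    package main

    import (
    	"strconv"
    	"strings"
    )

    func minorSkew(client, server string) int {
    	minor := func(v string) int {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0
    		}
    		m, _ := strconv.Atoi(parts[1])
    		return m
    	}
    	d := minor(client) - minor(server)
    	if d < 0 {
    		return -d
    	}
    	return d
    }
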
	
	
	==> Docker <==
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.407678550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.408085147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.418553372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.418906069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.419143968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.419612264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:41 multinode-965600 cri-dockerd[1236]: time="2024-04-01T12:40:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/68baba24d8cad6205fb428c00ceb9bfac5553ccd1336d03f3c1e5ed764159b34/resolv.conf as [nameserver 172.19.144.1]"
	Apr 01 12:40:41 multinode-965600 cri-dockerd[1236]: time="2024-04-01T12:40:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c3e5fcb7cd04f4db53fe20c9f82ea976f101a7fcb369a9fd553d548f791943b/resolv.conf as [nameserver 172.19.144.1]"
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.950703315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.950804015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.950819215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.953925199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.044358560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.044689768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.044832171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.045317284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.213955306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.214218312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.214321515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.214607922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 cri-dockerd[1236]: time="2024-04-01T12:40:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d4f816d5225de88e2b9bad13f6620cfba528082ffc80dae33ad6f287ee5759e/resolv.conf as [nameserver 172.19.144.1]"
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.607110949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.609646013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.609796917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.610127625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	891f5bd08d881       6e38f40d628db                                                                              38 seconds ago       Running             storage-provisioner       0                   4d4f816d5225d       storage-provisioner
	22dd1fb8d37bf       cbb01a7bd410d                                                                              39 seconds ago       Running             coredns                   0                   8c3e5fcb7cd04       coredns-76f75df574-wbm5g
	54e585be3aa8a       cbb01a7bd410d                                                                              39 seconds ago       Running             coredns                   0                   68baba24d8cad       coredns-76f75df574-vhxkq
	a2f7ee6fd8ef6       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988   42 seconds ago       Running             kindnet-cni               0                   b6d0f9166fdae       kindnet-pfltb
	613f71eef82cf       a1d263b5dc5b0                                                                              48 seconds ago       Running             kube-proxy                0                   04e418a4ad3e0       kube-proxy-426zj
	06e3c5e6bd4c9       8c390d98f50c0                                                                              About a minute ago   Running             kube-scheduler            0                   c44326475a9ec       kube-scheduler-multinode-965600
	dca4edaf448be       6052a25da3f97                                                                              About a minute ago   Running             kube-controller-manager   0                   a717fef2af390       kube-controller-manager-multinode-965600
	ef878fda62c3c       39f995c9f1996                                                                              About a minute ago   Running             kube-apiserver            0                   b7c0d239b48c6       kube-apiserver-multinode-965600
	4b2ab1b996436       3861cfcd7c04c                                                                              About a minute ago   Running             etcd                      0                   d24092c77cc2f       etcd-multinode-965600
	
	
	==> coredns [22dd1fb8d37b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [54e585be3aa8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               multinode-965600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-965600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=multinode-965600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T12_40_19_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 12:40:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-965600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 12:41:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 12:40:49 +0000   Mon, 01 Apr 2024 12:40:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 12:40:49 +0000   Mon, 01 Apr 2024 12:40:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 12:40:49 +0000   Mon, 01 Apr 2024 12:40:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 12:40:49 +0000   Mon, 01 Apr 2024 12:40:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.154.221
	  Hostname:    multinode-965600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7bf740c4c3e4623af73d139ade1038b
	  System UUID:                5de17ba9-5551-ee4e-8bca-5a015d97e7a1
	  Boot ID:                    e629ae01-f590-4fc1-841d-eb6d8fae7343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-vhxkq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     49s
	  kube-system                 coredns-76f75df574-wbm5g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     49s
	  kube-system                 etcd-multinode-965600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         62s
	  kube-system                 kindnet-pfltb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      49s
	  kube-system                 kube-apiserver-multinode-965600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-multinode-965600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-426zj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-multinode-965600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x6 over 71s)  kubelet          Node multinode-965600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x5 over 71s)  kubelet          Node multinode-965600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x5 over 71s)  kubelet          Node multinode-965600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s                kubelet          Node multinode-965600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s                kubelet          Node multinode-965600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s                kubelet          Node multinode-965600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           49s                node-controller  Node multinode-965600 event: Registered Node multinode-965600 in Controller
	  Normal  NodeReady                40s                kubelet          Node multinode-965600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 1 12:39] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.107873] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.083636] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[ +28.153562] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.113276] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.613126] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.225954] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.243774] systemd-fstab-generator[1019]: Ignoring "noauto" option for root device
	[  +2.877747] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.218030] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.220297] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.308078] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[ +12.256398] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +0.140340] kauditd_printk_skb: 205 callbacks suppressed
	[Apr 1 12:40] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +6.961001] systemd-fstab-generator[1803]: Ignoring "noauto" option for root device
	[  +0.111266] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.944322] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.179946] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.376863] systemd-fstab-generator[4412]: Ignoring "noauto" option for root device
	[  +0.217576] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.698919] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.035173] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [4b2ab1b99643] <==
	{"level":"info","ts":"2024-04-01T12:40:11.4299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"834c7c4986c2683a became candidate at term 2"}
	{"level":"info","ts":"2024-04-01T12:40:11.430067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"834c7c4986c2683a received MsgVoteResp from 834c7c4986c2683a at term 2"}
	{"level":"info","ts":"2024-04-01T12:40:11.43031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"834c7c4986c2683a became leader at term 2"}
	{"level":"info","ts":"2024-04-01T12:40:11.430464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 834c7c4986c2683a elected leader 834c7c4986c2683a at term 2"}
	{"level":"info","ts":"2024-04-01T12:40:11.436674Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T12:40:11.440848Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"834c7c4986c2683a","local-member-attributes":"{Name:multinode-965600 ClientURLs:[https://172.19.154.221:2379]}","request-path":"/0/members/834c7c4986c2683a/attributes","cluster-id":"bd0abeca69179d49","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T12:40:11.443619Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T12:40:11.444438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T12:40:11.471468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.154.221:2379"}
	{"level":"info","ts":"2024-04-01T12:40:11.475233Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd0abeca69179d49","local-member-id":"834c7c4986c2683a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T12:40:11.480545Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T12:40:11.480939Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T12:40:11.481481Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T12:40:11.5065Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T12:40:11.519138Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-04-01T12:40:38.828132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.146914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-01T12:40:38.828251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.92696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-965600\" ","response":"range_response_count:1 size:4494"}
	{"level":"info","ts":"2024-04-01T12:40:38.828288Z","caller":"traceutil/trace.go:171","msg":"trace[635364806] range","detail":"{range_begin:/registry/minions/multinode-965600; range_end:; response_count:1; response_revision:386; }","duration":"343.99036ms","start":"2024-04-01T12:40:38.484287Z","end":"2024-04-01T12:40:38.828277Z","steps":["trace[635364806] 'range keys from in-memory index tree'  (duration: 343.839062ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:40:38.828919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T12:40:38.484272Z","time spent":"344.633655ms","remote":"127.0.0.1:33948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":4518,"request content":"key:\"/registry/minions/multinode-965600\" "}
	{"level":"warn","ts":"2024-04-01T12:40:38.82915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.434299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-multinode-965600\" ","response":"range_response_count:1 size:6903"}
	{"level":"info","ts":"2024-04-01T12:40:38.829173Z","caller":"traceutil/trace.go:171","msg":"trace[970642742] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-multinode-965600; range_end:; response_count:1; response_revision:386; }","duration":"222.483499ms","start":"2024-04-01T12:40:38.606682Z","end":"2024-04-01T12:40:38.829166Z","steps":["trace[970642742] 'range keys from in-memory index tree'  (duration: 222.275301ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T12:40:38.828244Z","caller":"traceutil/trace.go:171","msg":"trace[207269401] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:386; }","duration":"361.317613ms","start":"2024-04-01T12:40:38.466906Z","end":"2024-04-01T12:40:38.828224Z","steps":["trace[207269401] 'range keys from in-memory index tree'  (duration: 361.003615ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:40:38.830431Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T12:40:38.466889Z","time spent":"363.530993ms","remote":"127.0.0.1:33818","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-01T12:40:39.020715Z","caller":"traceutil/trace.go:171","msg":"trace[620599794] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"178.954278ms","start":"2024-04-01T12:40:38.84173Z","end":"2024-04-01T12:40:39.020684Z","steps":["trace[620599794] 'process raft request'  (duration: 178.836479ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T12:40:39.067079Z","caller":"traceutil/trace.go:171","msg":"trace[420896172] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"124.208868ms","start":"2024-04-01T12:40:38.94285Z","end":"2024-04-01T12:40:39.067059Z","steps":["trace[420896172] 'process raft request'  (duration: 90.45844ms)","trace[420896172] 'compare'  (duration: 33.46133ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:41:20 up 3 min,  0 users,  load average: 0.83, 0.45, 0.17
	Linux multinode-965600 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a2f7ee6fd8ef] <==
	I0401 12:40:39.819039       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0401 12:40:39.819147       1 main.go:107] hostIP = 172.19.154.221
	podIP = 172.19.154.221
	I0401 12:40:39.819277       1 main.go:116] setting mtu 1500 for CNI 
	I0401 12:40:39.819289       1 main.go:146] kindnetd IP family: "ipv4"
	I0401 12:40:39.819302       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0401 12:40:40.515233       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:40:40.515472       1 main.go:227] handling current node
	I0401 12:40:50.530188       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:40:50.530232       1 main.go:227] handling current node
	I0401 12:41:00.543917       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:41:00.544065       1 main.go:227] handling current node
	I0401 12:41:10.550318       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:41:10.550490       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ef878fda62c3] <==
	I0401 12:40:14.248465       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 12:40:14.248871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 12:40:14.252090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 12:40:14.262907       1 controller.go:624] quota admission added evaluator for: namespaces
	I0401 12:40:14.294885       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 12:40:14.294915       1 aggregator.go:165] initial CRD sync complete...
	I0401 12:40:14.294922       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 12:40:14.294928       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 12:40:14.294935       1 cache.go:39] Caches are synced for autoregister controller
	I0401 12:40:14.356248       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 12:40:15.097610       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 12:40:15.107127       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 12:40:15.107228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 12:40:16.420062       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 12:40:16.508606       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 12:40:16.612974       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 12:40:16.628214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.154.221]
	I0401 12:40:16.629843       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 12:40:16.639111       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 12:40:17.171029       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 12:40:18.378900       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 12:40:18.403226       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 12:40:18.433325       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 12:40:31.074669       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0401 12:40:31.230077       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [dca4edaf448b] <==
	I0401 12:40:31.180797       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 12:40:31.211882       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0401 12:40:31.215128       1 shared_informer.go:318] Caches are synced for HPA
	I0401 12:40:31.216640       1 shared_informer.go:318] Caches are synced for endpoint
	I0401 12:40:31.244152       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 12:40:31.249678       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-vhxkq"
	I0401 12:40:31.346277       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-wbm5g"
	I0401 12:40:31.374799       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-426zj"
	I0401 12:40:31.410203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="312.376076ms"
	I0401 12:40:31.411778       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pfltb"
	I0401 12:40:31.555913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="145.606924ms"
	I0401 12:40:31.556084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="116.498µs"
	I0401 12:40:31.590097       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 12:40:31.590732       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0401 12:40:31.643786       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 12:40:40.792903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.698µs"
	I0401 12:40:40.804215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="92.3µs"
	I0401 12:40:40.824154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.5µs"
	I0401 12:40:40.843605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="267.798µs"
	I0401 12:40:41.085983       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0401 12:40:43.473222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.403µs"
	I0401 12:40:43.544358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="33.576528ms"
	I0401 12:40:43.545112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="64.601µs"
	I0401 12:40:43.621782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="29.60553ms"
	I0401 12:40:43.621885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="47.502µs"
	
	
	==> kube-proxy [613f71eef82c] <==
	I0401 12:40:32.708837       1 server_others.go:72] "Using iptables proxy"
	I0401 12:40:32.732846       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.154.221"]
	I0401 12:40:32.840480       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 12:40:32.841353       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 12:40:32.841515       1 server_others.go:168] "Using iptables Proxier"
	I0401 12:40:32.846360       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 12:40:32.846972       1 server.go:865] "Version info" version="v1.29.3"
	I0401 12:40:32.847176       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 12:40:32.849661       1 config.go:188] "Starting service config controller"
	I0401 12:40:32.849821       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 12:40:32.849850       1 config.go:97] "Starting endpoint slice config controller"
	I0401 12:40:32.849857       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 12:40:32.850966       1 config.go:315] "Starting node config controller"
	I0401 12:40:32.851069       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 12:40:32.951264       1 shared_informer.go:318] Caches are synced for node config
	I0401 12:40:32.951322       1 shared_informer.go:318] Caches are synced for service config
	I0401 12:40:32.951356       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [06e3c5e6bd4c] <==
	W0401 12:40:15.429238       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 12:40:15.429301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 12:40:15.479491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 12:40:15.479799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 12:40:15.575892       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 12:40:15.575996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 12:40:15.582700       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 12:40:15.582869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 12:40:15.596457       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 12:40:15.596506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 12:40:15.626558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 12:40:15.626761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 12:40:15.694771       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 12:40:15.695046       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 12:40:15.706788       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 12:40:15.706845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 12:40:15.744587       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 12:40:15.744860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 12:40:15.776916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 12:40:15.777013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 12:40:15.799367       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 12:40:15.805566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 12:40:15.862621       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 12:40:15.862761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0401 12:40:18.656856       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 12:40:31 multinode-965600 kubelet[2839]: I0401 12:40:31.560582    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1bc27de-58b6-4b36-99bd-6bdb473fc573-lib-modules\") pod \"kindnet-pfltb\" (UID: \"f1bc27de-58b6-4b36-99bd-6bdb473fc573\") " pod="kube-system/kindnet-pfltb"
	Apr 01 12:40:31 multinode-965600 kubelet[2839]: I0401 12:40:31.560619    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f1bc27de-58b6-4b36-99bd-6bdb473fc573-cni-cfg\") pod \"kindnet-pfltb\" (UID: \"f1bc27de-58b6-4b36-99bd-6bdb473fc573\") " pod="kube-system/kindnet-pfltb"
	Apr 01 12:40:31 multinode-965600 kubelet[2839]: I0401 12:40:31.560910    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1bc27de-58b6-4b36-99bd-6bdb473fc573-xtables-lock\") pod \"kindnet-pfltb\" (UID: \"f1bc27de-58b6-4b36-99bd-6bdb473fc573\") " pod="kube-system/kindnet-pfltb"
	Apr 01 12:40:31 multinode-965600 kubelet[2839]: I0401 12:40:31.560963    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/521b81d0-8859-44fb-baa6-33e43d5d5b9b-lib-modules\") pod \"kube-proxy-426zj\" (UID: \"521b81d0-8859-44fb-baa6-33e43d5d5b9b\") " pod="kube-system/kube-proxy-426zj"
	Apr 01 12:40:31 multinode-965600 kubelet[2839]: I0401 12:40:31.561001    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jdv\" (UniqueName: \"kubernetes.io/projected/f1bc27de-58b6-4b36-99bd-6bdb473fc573-kube-api-access-n6jdv\") pod \"kindnet-pfltb\" (UID: \"f1bc27de-58b6-4b36-99bd-6bdb473fc573\") " pod="kube-system/kindnet-pfltb"
	Apr 01 12:40:39 multinode-965600 kubelet[2839]: I0401 12:40:39.029727    2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-426zj" podStartSLOduration=8.029667148 podStartE2EDuration="8.029667148s" podCreationTimestamp="2024-04-01 12:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-01 12:40:32.983731267 +0000 UTC m=+14.678181867" watchObservedRunningTime="2024-04-01 12:40:39.029667148 +0000 UTC m=+20.724117648"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.720721    2839 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.785827    2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-pfltb" podStartSLOduration=4.27646356 podStartE2EDuration="9.785760923s" podCreationTimestamp="2024-04-01 12:40:31 +0000 UTC" firstStartedPulling="2024-04-01 12:40:32.763378442 +0000 UTC m=+14.457828942" lastFinishedPulling="2024-04-01 12:40:38.272675805 +0000 UTC m=+19.967126305" observedRunningTime="2024-04-01 12:40:40.086673655 +0000 UTC m=+21.781124255" watchObservedRunningTime="2024-04-01 12:40:40.785760923 +0000 UTC m=+22.480211423"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.786455    2839 topology_manager.go:215] "Topology Admit Handler" podUID="4af8fdd8-85f7-47c9-815c-80ca21486d61" podNamespace="kube-system" podName="coredns-76f75df574-vhxkq"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.799943    2839 topology_manager.go:215] "Topology Admit Handler" podUID="c40f0e5d-01bc-4222-953b-b253a72a624a" podNamespace="kube-system" podName="coredns-76f75df574-wbm5g"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.940159    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-552ls\" (UniqueName: \"kubernetes.io/projected/c40f0e5d-01bc-4222-953b-b253a72a624a-kube-api-access-552ls\") pod \"coredns-76f75df574-wbm5g\" (UID: \"c40f0e5d-01bc-4222-953b-b253a72a624a\") " pod="kube-system/coredns-76f75df574-wbm5g"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.940334    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wgsp\" (UniqueName: \"kubernetes.io/projected/4af8fdd8-85f7-47c9-815c-80ca21486d61-kube-api-access-4wgsp\") pod \"coredns-76f75df574-vhxkq\" (UID: \"4af8fdd8-85f7-47c9-815c-80ca21486d61\") " pod="kube-system/coredns-76f75df574-vhxkq"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.940378    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4af8fdd8-85f7-47c9-815c-80ca21486d61-config-volume\") pod \"coredns-76f75df574-vhxkq\" (UID: \"4af8fdd8-85f7-47c9-815c-80ca21486d61\") " pod="kube-system/coredns-76f75df574-vhxkq"
	Apr 01 12:40:40 multinode-965600 kubelet[2839]: I0401 12:40:40.940467    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c40f0e5d-01bc-4222-953b-b253a72a624a-config-volume\") pod \"coredns-76f75df574-wbm5g\" (UID: \"c40f0e5d-01bc-4222-953b-b253a72a624a\") " pod="kube-system/coredns-76f75df574-wbm5g"
	Apr 01 12:40:41 multinode-965600 kubelet[2839]: I0401 12:40:41.492863    2839 topology_manager.go:215] "Topology Admit Handler" podUID="0d2b4875-623c-48e9-9a6c-d4a18ae61c82" podNamespace="kube-system" podName="storage-provisioner"
	Apr 01 12:40:41 multinode-965600 kubelet[2839]: I0401 12:40:41.649025    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzlq\" (UniqueName: \"kubernetes.io/projected/0d2b4875-623c-48e9-9a6c-d4a18ae61c82-kube-api-access-9wzlq\") pod \"storage-provisioner\" (UID: \"0d2b4875-623c-48e9-9a6c-d4a18ae61c82\") " pod="kube-system/storage-provisioner"
	Apr 01 12:40:41 multinode-965600 kubelet[2839]: I0401 12:40:41.649124    2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0d2b4875-623c-48e9-9a6c-d4a18ae61c82-tmp\") pod \"storage-provisioner\" (UID: \"0d2b4875-623c-48e9-9a6c-d4a18ae61c82\") " pod="kube-system/storage-provisioner"
	Apr 01 12:40:42 multinode-965600 kubelet[2839]: I0401 12:40:42.414480    2839 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d4f816d5225de88e2b9bad13f6620cfba528082ffc80dae33ad6f287ee5759e"
	Apr 01 12:40:43 multinode-965600 kubelet[2839]: I0401 12:40:43.477197    2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vhxkq" podStartSLOduration=12.476863548 podStartE2EDuration="12.476863548s" podCreationTimestamp="2024-04-01 12:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-01 12:40:43.46516526 +0000 UTC m=+25.159615760" watchObservedRunningTime="2024-04-01 12:40:43.476863548 +0000 UTC m=+25.171314048"
	Apr 01 12:40:43 multinode-965600 kubelet[2839]: I0401 12:40:43.548199    2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.548147706 podStartE2EDuration="3.548147706s" podCreationTimestamp="2024-04-01 12:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-01 12:40:43.548031404 +0000 UTC m=+25.242481904" watchObservedRunningTime="2024-04-01 12:40:43.548147706 +0000 UTC m=+25.242598306"
	Apr 01 12:41:18 multinode-965600 kubelet[2839]: E0401 12:41:18.659558    2839 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 12:41:18 multinode-965600 kubelet[2839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 12:41:18 multinode-965600 kubelet[2839]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 12:41:18 multinode-965600 kubelet[2839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 12:41:18 multinode-965600 kubelet[2839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [891f5bd08d88] <==
	I0401 12:40:42.704544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 12:40:42.717625       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 12:40:42.717935       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 12:40:42.740093       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 12:40:42.742252       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-965600_89db7636-5ee2-4db0-b079-443518af507b!
	I0401 12:40:42.742663       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1535733-13af-4426-8790-ceb734773f61", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-965600_89db7636-5ee2-4db0-b079-443518af507b became leader
	I0401 12:40:42.843316       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-965600_89db7636-5ee2-4db0-b079-443518af507b!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:41:11.928317    9880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
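
Note: the stderr warning itself is harmless under the hyperv driver; it only means the host has no Docker CLI context metadata for "default". The long hex directory in the path is simply the SHA-256 of the context name, which a few lines of Go confirm (a standalone sketch, not part of the test suite):

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
		// the directory name under .docker\contexts\meta in the warning above.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	}
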
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-965600 -n multinode-965600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-965600 -n multinode-965600: (12.8519158s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-965600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (234.48s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (517.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-965600
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-965600-m01 --driver=hyperv
E0401 12:43:23.482502    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
multinode_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-965600-m01 --driver=hyperv: (3m32.6885803s)
multinode_test.go:466: expected start profile command to fail. args "out/minikube-windows-amd64.exe start -p multinode-965600-m01 --driver=hyperv"
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-965600-m02 --driver=hyperv
E0401 12:46:26.759107    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 12:48:23.485391    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-965600-m02 --driver=hyperv: (3m32.488575s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-965600
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-965600: exit status 80 (7.9905754s)

                                                
                                                
-- stdout --
	* Adding node m02 to cluster multinode-965600 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:48:40.292338    6072 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-965600-m02 already exists in multinode-965600-m02 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_8a500d2181d400fd32bfc5983efc601de14422c3_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
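
Note: both failures above trace back to name collisions. minikube names additional nodes of a profile <profile>-m02, <profile>-m03, and so on, so a standalone profile named multinode-965600-m02 occupies exactly the name that `node add -p multinode-965600` tries to assign next (hence GUEST_NODE_ADD), and the test expected the earlier `start -p multinode-965600-m01` to be rejected for the same reason. A minimal Go sketch of the kind of guard the test exercises (a hypothetical helper, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// looksLikeNodeName reports whether a proposed profile name matches the
	// <profile>-mNN pattern minikube uses for additional nodes of an existing
	// profile. Hypothetical sketch; minikube's real validation lives elsewhere.
	func looksLikeNodeName(proposed, existing string) bool {
		suffix, ok := strings.CutPrefix(proposed, existing+"-m")
		if !ok || len(suffix) != 2 {
			return false
		}
		_, err := strconv.Atoi(suffix)
		return err == nil
	}

	func main() {
		// Both names from this test collide with node names of multinode-965600:
		fmt.Println(looksLikeNodeName("multinode-965600-m01", "multinode-965600")) // true
		fmt.Println(looksLikeNodeName("multinode-965600-m02", "multinode-965600")) // true
	}
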
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-965600-m02
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-965600-m02: (48.5090205s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-965600 -n multinode-965600: (13.1564328s)
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-965600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-965600 logs -n 25: (8.9447858s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:26 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- exec          | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | -- nslookup kubernetes.io            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- exec          | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | -- nslookup kubernetes.default       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600                  | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:27 UTC |                     |
	|         | -- exec  -- nslookup                 |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                      |                   |                |                     |                     |
	| kubectl | -p multinode-965600 -- get pods -o   | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:28 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |                   |                |                     |                     |
	| node    | add -p multinode-965600 -v 3         | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:28 UTC |                     |
	|         | --alsologtostderr                    |                      |                   |                |                     |                     |
	| node    | multinode-965600 node stop m03       | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:29 UTC |                     |
	| node    | multinode-965600 node start          | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:30 UTC |                     |
	|         | m03 -v=7 --alsologtostderr           |                      |                   |                |                     |                     |
	| node    | list -p multinode-965600             | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:31 UTC |                     |
	| stop    | -p multinode-965600                  | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:31 UTC | 01 Apr 24 12:32 UTC |
	| start   | -p multinode-965600                  | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:32 UTC |                     |
	|         | --wait=true -v=8                     |                      |                   |                |                     |                     |
	|         | --alsologtostderr                    |                      |                   |                |                     |                     |
	| node    | list -p multinode-965600             | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:35 UTC |                     |
	| node    | multinode-965600 node delete         | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:35 UTC |                     |
	|         | m03                                  |                      |                   |                |                     |                     |
	| stop    | multinode-965600 stop                | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:36 UTC | 01 Apr 24 12:37 UTC |
	| start   | -p multinode-965600                  | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:37 UTC | 01 Apr 24 12:40 UTC |
	|         | --wait=true -v=8                     |                      |                   |                |                     |                     |
	|         | --alsologtostderr                    |                      |                   |                |                     |                     |
	|         | --driver=hyperv                      |                      |                   |                |                     |                     |
	| node    | list -p multinode-965600             | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:41 UTC |                     |
	| start   | -p multinode-965600-m01              | multinode-965600-m01 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:41 UTC | 01 Apr 24 12:45 UTC |
	|         | --driver=hyperv                      |                      |                   |                |                     |                     |
	| start   | -p multinode-965600-m02              | multinode-965600-m02 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:45 UTC | 01 Apr 24 12:48 UTC |
	|         | --driver=hyperv                      |                      |                   |                |                     |                     |
	| node    | add -p multinode-965600              | multinode-965600     | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:48 UTC |                     |
	| delete  | -p multinode-965600-m02              | multinode-965600-m02 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 12:48 UTC | 01 Apr 24 12:49 UTC |
	|---------|--------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 12:45:07
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 12:45:07.892503    7408 out.go:291] Setting OutFile to fd 744 ...
	I0401 12:45:07.893515    7408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:45:07.893515    7408 out.go:304] Setting ErrFile to fd 716...
	I0401 12:45:07.893515    7408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:45:07.921329    7408 out.go:298] Setting JSON to false
	I0401 12:45:07.924980    7408 start.go:129] hostinfo: {"hostname":"minikube6","uptime":318266,"bootTime":1711657241,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 12:45:07.924980    7408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 12:45:07.932090    7408 out.go:177] * [multinode-965600-m02] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 12:45:07.937860    7408 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:45:07.936103    7408 notify.go:220] Checking for updates...
	I0401 12:45:07.940679    7408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 12:45:07.942918    7408 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 12:45:07.945481    7408 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 12:45:07.947431    7408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 12:45:07.951763    7408 config.go:182] Loaded profile config "ha-401500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:45:07.951763    7408 config.go:182] Loaded profile config "multinode-965600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:45:07.952745    7408 config.go:182] Loaded profile config "multinode-965600-m01": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:45:07.952745    7408 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 12:45:13.777991    7408 out.go:177] * Using the hyperv driver based on user configuration
	I0401 12:45:13.781960    7408 start.go:297] selected driver: hyperv
	I0401 12:45:13.781960    7408 start.go:901] validating driver "hyperv" against <nil>
	I0401 12:45:13.781960    7408 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 12:45:13.781960    7408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 12:45:13.837134    7408 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0401 12:45:13.837798    7408 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 12:45:13.837798    7408 cni.go:84] Creating CNI manager for ""
	I0401 12:45:13.837798    7408 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 12:45:13.837798    7408 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 12:45:13.837798    7408 start.go:340] cluster config:
	{Name:multinode-965600-m02 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600-m02 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:45:13.838743    7408 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 12:45:13.845023    7408 out.go:177] * Starting "multinode-965600-m02" primary control-plane node in "multinode-965600-m02" cluster
	I0401 12:45:13.848406    7408 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 12:45:13.848406    7408 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 12:45:13.848406    7408 cache.go:56] Caching tarball of preloaded images
	I0401 12:45:13.848406    7408 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0401 12:45:13.848406    7408 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0401 12:45:13.848406    7408 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\config.json ...
	I0401 12:45:13.849423    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\config.json: {Name:mk16219949d78fde4abe1e05dbfb0a86003754f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:45:13.850420    7408 start.go:360] acquireMachinesLock for multinode-965600-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 12:45:13.850420    7408 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-965600-m02"
	I0401 12:45:13.850420    7408 start.go:93] Provisioning new machine with config: &{Name:multinode-965600-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-965600-m02 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 12:45:13.850420    7408 start.go:125] createHost starting for "" (driver="hyperv")
	I0401 12:45:13.853407    7408 out.go:204] * Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0401 12:45:13.854469    7408 start.go:159] libmachine.API.Create for "multinode-965600-m02" (driver="hyperv")
	I0401 12:45:13.854469    7408 client.go:168] LocalClient.Create starting
	I0401 12:45:13.854469    7408 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0401 12:45:13.854469    7408 main.go:141] libmachine: Decoding PEM data...
	I0401 12:45:13.854469    7408 main.go:141] libmachine: Parsing certificate...
	I0401 12:45:13.854469    7408 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0401 12:45:13.855519    7408 main.go:141] libmachine: Decoding PEM data...
	I0401 12:45:13.855519    7408 main.go:141] libmachine: Parsing certificate...
	I0401 12:45:13.855519    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0401 12:45:16.100921    7408 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0401 12:45:16.100921    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:16.100921    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0401 12:45:17.976008    7408 main.go:141] libmachine: [stdout =====>] : False
	
	I0401 12:45:17.976337    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:17.976411    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 12:45:19.598867    7408 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 12:45:19.598867    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:19.598867    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 12:45:23.609820    7408 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 12:45:23.609820    7408 main.go:141] libmachine: [stderr =====>] : 
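	
	(Each `[executing ==>]` / `[stdout =====>]` pair is the hyperv driver shelling out to PowerShell; queries like the switch listing come back as `ConvertTo-Json` text. A minimal, hypothetical Go sketch of that pattern — assuming only that `powershell.exe` is on PATH; the `@(...)` wrapper in the real query matters because `ConvertTo-Json` emits a bare object, not a one-element array, for a single switch:)
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}
	
	func main() {
		// Same kind of query the log shows: list switches as JSON.
		ps := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		if err != nil {
			panic(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}
	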
	I0401 12:45:23.613256    7408 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 12:45:24.181293    7408 main.go:141] libmachine: Creating SSH key...
	I0401 12:45:24.488437    7408 main.go:141] libmachine: Creating VM...
	I0401 12:45:24.488437    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0401 12:45:27.627261    7408 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0401 12:45:27.627261    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:27.627379    7408 main.go:141] libmachine: Using switch "Default Switch"
	I0401 12:45:27.627511    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0401 12:45:29.529028    7408 main.go:141] libmachine: [stdout =====>] : True
	
	I0401 12:45:29.529028    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:29.529028    7408 main.go:141] libmachine: Creating VHD
	I0401 12:45:29.529028    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0401 12:45:33.513283    7408 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 18CC1933-4F87-4844-A633-83E6AD938539
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0401 12:45:33.513283    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:33.514274    7408 main.go:141] libmachine: Writing magic tar header
	I0401 12:45:33.514456    7408 main.go:141] libmachine: Writing SSH key tar header
	I0401 12:45:33.524430    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0401 12:45:36.902957    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:45:36.902957    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:36.903049    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\disk.vhd' -SizeBytes 20000MB
	I0401 12:45:39.631806    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:45:39.631806    7408 main.go:141] libmachine: [stderr =====>] : 
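	
	(The sequence above — a fixed 10MB `New-VHD`, the "magic tar header", `Convert-VHD` to dynamic, `Resize-VHD` to 20000MB — is the docker-machine disk bootstrap: a tiny tar archive holding the freshly generated SSH key is written over the start of the disk so the guest can unpack it on first boot. A simplified Go sketch of the tar-writing step; the file names are hypothetical and the real driver also preserves the VHD footer:)
	
	package main
	
	import (
		"archive/tar"
		"os"
	)
	
	// writeKeyTar overwrites the start of a raw disk image with a small tar
	// archive containing the SSH public key — the "magic tar header" the
	// log mentions — so the guest can pick it up on first boot.
	func writeKeyTar(disk, pubKeyPath string) error {
		key, err := os.ReadFile(pubKeyPath)
		if err != nil {
			return err
		}
		f, err := os.OpenFile(disk, os.O_WRONLY, 0)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f) // writes from offset 0 of the disk image
		hdr := &tar.Header{
			Name:     ".ssh/authorized_keys",
			Typeflag: tar.TypeReg,
			Mode:     0644,
			Size:     int64(len(key)),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(key); err != nil {
			return err
		}
		return tw.Close()
	}
	
	func main() {
		// Hypothetical paths, for illustration only.
		if err := writeKeyTar("fixed.vhd", "id_rsa.pub"); err != nil {
			panic(err)
		}
	}
	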
	I0401 12:45:39.631897    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-965600-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 6000MB
	I0401 12:45:43.811108    7408 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-965600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0401 12:45:43.811108    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:43.811210    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-965600-m02 -DynamicMemoryEnabled $false
	I0401 12:45:46.258554    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:45:46.258554    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:46.259623    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-965600-m02 -Count 2
	I0401 12:45:48.610614    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:45:48.610822    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:48.610822    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-965600-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\boot2docker.iso'
	I0401 12:45:51.406599    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:45:51.406599    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:51.406599    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-965600-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\disk.vhd'
	I0401 12:45:54.227844    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:45:54.227844    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:54.227844    7408 main.go:141] libmachine: Starting VM...
	I0401 12:45:54.227910    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-965600-m02
	I0401 12:45:57.549324    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:45:57.549361    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:57.549361    7408 main.go:141] libmachine: Waiting for host to start...
	I0401 12:45:57.549361    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:45:59.986905    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:45:59.986905    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:45:59.987289    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:02.702437    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:46:02.702437    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:03.702802    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:06.057211    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:06.057211    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:06.057211    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:08.815918    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:46:08.815918    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:09.823135    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:12.215350    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:12.215350    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:12.215350    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:14.917610    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:46:14.917610    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:15.931575    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:18.289818    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:18.289818    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:18.290406    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:20.992646    7408 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:46:20.992646    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:22.002254    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:24.337496    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:24.338038    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:24.338038    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:27.150365    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:46:27.150365    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:27.150365    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:29.477303    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:29.477303    7408 main.go:141] libmachine: [stderr =====>] : 
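	
	("Waiting for host to start..." is a poll loop: check the VM state, then the first NIC's first address, roughly once per second; the empty `[stdout]` lines are iterations where DHCP had not yet assigned a lease, and the loop ends once 172.19.151.48 appears. A minimal sketch of the same loop, reusing the exact PowerShell queries from the log; `ps` is a hypothetical helper:)
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// ps runs a PowerShell snippet and returns trimmed stdout, mirroring
	// the "[executing ==>]" lines above.
	func ps(cmd string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}
	
	// waitForIP polls until Hyper-V reports an address for the VM.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, _ := ps(`( Hyper-V\Get-VM ` + vm + ` ).state`)
			if state == "Running" {
				ip, _ := ps(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
				if ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %s", vm)
	}
	
	func main() {
		ip, err := waitForIP("multinode-965600-m02", 5*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("VM IP:", ip)
	}
	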
	I0401 12:46:29.477303    7408 machine.go:94] provisionDockerMachine start ...
	I0401 12:46:29.477427    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:31.795092    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:31.795092    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:31.795092    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:34.524744    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:46:34.524744    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:34.533836    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:46:34.543345    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:46:34.543345    7408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 12:46:34.682046    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 12:46:34.682046    7408 buildroot.go:166] provisioning hostname "multinode-965600-m02"
	I0401 12:46:34.682584    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:37.008102    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:37.008102    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:37.008308    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:39.987704    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:46:39.987704    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:39.994819    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:46:39.995196    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:46:39.995196    7408 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-965600-m02 && echo "multinode-965600-m02" | sudo tee /etc/hostname
	I0401 12:46:40.164278    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965600-m02
	
	I0401 12:46:40.164278    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:42.502607    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:42.502607    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:42.503431    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:45.271843    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:46:45.271843    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:45.278848    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:46:45.279062    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:46:45.279062    7408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-965600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-965600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-965600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 12:46:45.423094    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 12:46:45.423094    7408 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 12:46:45.423094    7408 buildroot.go:174] setting up certificates
	I0401 12:46:45.423094    7408 provision.go:84] configureAuth start
	I0401 12:46:45.423094    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:47.744460    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:47.744460    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:47.744910    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:50.473792    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:46:50.473792    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:50.474058    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:52.770785    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:52.770971    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:52.771026    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:46:55.463023    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:46:55.463023    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:55.463023    7408 provision.go:143] copyHostCerts
	I0401 12:46:55.463827    7408 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 12:46:55.463827    7408 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 12:46:55.463827    7408 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 12:46:55.464939    7408 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 12:46:55.464939    7408 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 12:46:55.465680    7408 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 12:46:55.466336    7408 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 12:46:55.466336    7408 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 12:46:55.467002    7408 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 12:46:55.467604    7408 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-965600-m02 san=[127.0.0.1 172.19.151.48 localhost minikube multinode-965600-m02]
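	
	(configureAuth re-syncs the host certs and then mints a per-machine server certificate whose subject alternative names are exactly the `san=[...]` list above: loopback, the VM's IP, and the host names. A self-contained Go sketch of issuing such a SAN-bearing certificate — hedged: it generates a throwaway in-memory CA instead of loading minikube's ca.pem/ca-key.pem, and the validity period is arbitrary:)
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA standing in for ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server cert carrying the same SAN set the log shows.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-965600-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			DNSNames:     []string{"localhost", "minikube", "multinode-965600-m02"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.151.48")},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	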
	I0401 12:46:55.668709    7408 provision.go:177] copyRemoteCerts
	I0401 12:46:55.681772    7408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 12:46:55.681772    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:46:57.970060    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:46:57.970060    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:46:57.971021    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:00.783303    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:00.783303    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:00.784434    7408 sshutil.go:53] new ssh client: &{IP:172.19.151.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\id_rsa Username:docker}
	I0401 12:47:00.898480    7408 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2165966s)
	I0401 12:47:00.899142    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 12:47:00.953927    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0401 12:47:01.006230    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 12:47:01.061346    7408 provision.go:87] duration metric: took 15.6381408s to configureAuth
	I0401 12:47:01.061346    7408 buildroot.go:189] setting minikube options for container-runtime
	I0401 12:47:01.062062    7408 config.go:182] Loaded profile config "multinode-965600-m02": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:47:01.062062    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:03.352492    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:03.352492    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:03.352492    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:06.065979    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:06.065979    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:06.072256    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:47:06.072787    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:47:06.072787    7408 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 12:47:06.209458    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 12:47:06.209458    7408 buildroot.go:70] root file system type: tmpfs
	I0401 12:47:06.209691    7408 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 12:47:06.209868    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:08.506726    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:08.506726    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:08.506810    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:11.290200    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:11.290277    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:11.295942    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:47:11.296605    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:47:11.296605    7408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 12:47:11.465008    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 12:47:11.465008    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:13.821102    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:13.821352    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:13.821461    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:16.566158    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:16.566158    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:16.572537    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:47:16.573381    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:47:16.573381    7408 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 12:47:18.777914    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0401 12:47:18.777914    7408 machine.go:97] duration metric: took 49.3001367s to provisionDockerMachine
	I0401 12:47:18.777914    7408 client.go:171] duration metric: took 2m4.922558s to LocalClient.Create
	I0401 12:47:18.777914    7408 start.go:167] duration metric: took 2m4.922558s to libmachine.API.Create "multinode-965600-m02"
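	
	(The `diff ... || { mv ...; systemctl ...; }` one-liner above is an idempotent install: the freshly written docker.service.new only replaces the live unit — and only triggers daemon-reload/enable/restart — when the contents differ; the `can't stat` message simply means no docker.service existed yet on this fresh VM. A local, hypothetical Go rendering of the same compare-then-swap pattern:)
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	// swapIfChanged installs path.new over path only when contents differ,
	// and reports whether a service restart is warranted.
	func swapIfChanged(path string) (bool, error) {
		oldData, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		newData, err := os.ReadFile(path + ".new")
		if err != nil {
			return false, err
		}
		if bytes.Equal(oldData, newData) {
			return false, os.Remove(path + ".new")
		}
		return true, os.Rename(path+".new", path)
	}
	
	func main() {
		changed, err := swapIfChanged("/lib/systemd/system/docker.service")
		if err != nil {
			panic(err)
		}
		if changed {
			// Matches the log's follow-up: reload units, restart the service.
			exec.Command("systemctl", "daemon-reload").Run()
			exec.Command("systemctl", "restart", "docker").Run()
		}
		fmt.Println("unit changed:", changed)
	}
	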
	I0401 12:47:18.777914    7408 start.go:293] postStartSetup for "multinode-965600-m02" (driver="hyperv")
	I0401 12:47:18.777914    7408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 12:47:18.793819    7408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 12:47:18.793819    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:21.037408    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:21.037408    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:21.037408    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:23.761418    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:23.761418    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:23.762264    7408 sshutil.go:53] new ssh client: &{IP:172.19.151.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\id_rsa Username:docker}
	I0401 12:47:23.874830    7408 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0808846s)
	I0401 12:47:23.887109    7408 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 12:47:23.894621    7408 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 12:47:23.894621    7408 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 12:47:23.895080    7408 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 12:47:23.896032    7408 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 12:47:23.908005    7408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 12:47:23.928356    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 12:47:23.981872    7408 start.go:296] duration metric: took 5.2038585s for postStartSetup
	I0401 12:47:23.984567    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:26.245987    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:26.245987    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:26.246967    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:28.986946    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:28.988092    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:28.988350    7408 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\config.json ...
	I0401 12:47:28.992669    7408 start.go:128] duration metric: took 2m15.1412894s to createHost
	I0401 12:47:28.992792    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:31.305719    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:31.305719    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:31.305830    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:34.022147    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:34.022147    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:34.028827    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:47:34.029676    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:47:34.029676    7408 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 12:47:34.163691    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711975654.148422833
	
	I0401 12:47:34.163691    7408 fix.go:216] guest clock: 1711975654.148422833
	I0401 12:47:34.163691    7408 fix.go:229] Guest: 2024-04-01 12:47:34.148422833 +0000 UTC Remote: 2024-04-01 12:47:28.9926697 +0000 UTC m=+141.292562301 (delta=5.155753133s)
	I0401 12:47:34.163822    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:36.495943    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:36.495943    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:36.495943    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:39.270748    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:39.270748    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:39.276669    7408 main.go:141] libmachine: Using SSH client type: native
	I0401 12:47:39.277294    7408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.151.48 22 <nil> <nil>}
	I0401 12:47:39.277294    7408 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711975654
	I0401 12:47:39.418338    7408 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 12:47:34 UTC 2024
	
	I0401 12:47:39.418338    7408 fix.go:236] clock set: Mon Apr  1 12:47:34 UTC 2024
	 (err=<nil>)
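	
	(fix.go samples the guest clock with `date +%s.%N`, diffs it against the host's wall clock — about 5.16s here — and pins the guest with `sudo date -s @<seconds>`. A small Go sketch of the delta computation; `runSSH` and the 2-second tolerance are assumptions for illustration, not minikube's actual values:)
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// clockDelta parses the guest's `date +%s.%N` output and returns
	// guest-minus-host skew; runSSH is assumed to run the command on the
	// VM and return its stdout.
	func clockDelta(runSSH func(string) (string, error)) (time.Duration, error) {
		out, err := runSSH("date +%s.%N")
		if err != nil {
			return 0, err
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(time.Now()), nil
	}
	
	func main() {
		// Stubbed with the guest timestamp from the log above.
		fake := func(string) (string, error) { return "1711975654.148422833", nil }
		d, _ := clockDelta(fake)
		fmt.Printf("delta=%v\n", d)
		if d > 2*time.Second || d < -2*time.Second {
			// The log fixes skew with: sudo date -s @<unix-seconds>
			fmt.Println("would run: sudo date -s @1711975654")
		}
	}
	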
	I0401 12:47:39.418338    7408 start.go:83] releasing machines lock for "multinode-965600-m02", held for 2m25.5668845s
	I0401 12:47:39.418606    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:41.730202    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:41.730202    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:41.731253    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:44.573913    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:44.573913    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:44.581448    7408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 12:47:44.582213    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:44.592936    7408 ssh_runner.go:195] Run: cat /version.json
	I0401 12:47:44.592936    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:47:47.028033    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:47.028033    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:47.028254    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:47.029949    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:47:47.029949    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:47.030151    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:47:50.019400    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:50.019841    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:50.019910    7408 sshutil.go:53] new ssh client: &{IP:172.19.151.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\id_rsa Username:docker}
	I0401 12:47:50.050704    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:47:50.050704    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:47:50.051219    7408 sshutil.go:53] new ssh client: &{IP:172.19.151.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\id_rsa Username:docker}
	I0401 12:47:50.110544    7408 ssh_runner.go:235] Completed: cat /version.json: (5.5175688s)
	I0401 12:47:50.124729    7408 ssh_runner.go:195] Run: systemctl --version
	I0401 12:47:50.188796    7408 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.6072178s)
	I0401 12:47:50.202351    7408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 12:47:50.211406    7408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 12:47:50.224362    7408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 12:47:50.261544    7408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 12:47:50.261583    7408 start.go:494] detecting cgroup driver to use...
	I0401 12:47:50.261869    7408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:47:50.316997    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 12:47:50.350834    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 12:47:50.372882    7408 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 12:47:50.384721    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 12:47:50.418670    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:47:50.451637    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 12:47:50.489660    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:47:50.523609    7408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 12:47:50.560823    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 12:47:50.594683    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 12:47:50.627887    7408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 12:47:50.662225    7408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 12:47:50.697709    7408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 12:47:50.733415    7408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:47:50.962637    7408 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 12:47:50.996364    7408 start.go:494] detecting cgroup driver to use...
	I0401 12:47:51.013252    7408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 12:47:51.054024    7408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:47:51.099926    7408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 12:47:51.148420    7408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:47:51.193348    7408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:47:51.234915    7408 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 12:47:51.310517    7408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:47:51.337454    7408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:47:51.391303    7408 ssh_runner.go:195] Run: which cri-dockerd
	I0401 12:47:51.410937    7408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 12:47:51.432062    7408 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 12:47:51.479944    7408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 12:47:51.697422    7408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 12:47:51.905979    7408 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 12:47:51.906702    7408 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 12:47:51.962245    7408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:47:52.184915    7408 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 12:47:54.744539    7408 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5596065s)
	I0401 12:47:54.756551    7408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0401 12:47:54.796387    7408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 12:47:54.840389    7408 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0401 12:47:55.075120    7408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0401 12:47:55.301090    7408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:47:55.534380    7408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0401 12:47:55.580544    7408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0401 12:47:55.621915    7408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:47:55.835141    7408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0401 12:47:55.950327    7408 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0401 12:47:55.961883    7408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0401 12:47:55.972113    7408 start.go:562] Will wait 60s for crictl version
	I0401 12:47:55.983648    7408 ssh_runner.go:195] Run: which crictl
	I0401 12:47:56.002717    7408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 12:47:56.085874    7408 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0401 12:47:56.095360    7408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 12:47:56.145118    7408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0401 12:47:56.188664    7408 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0401 12:47:56.188813    7408 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0401 12:47:56.192839    7408 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0401 12:47:56.192839    7408 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0401 12:47:56.192839    7408 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0401 12:47:56.192839    7408 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:3d:46:6d Flags:up|broadcast|multicast|running}
	I0401 12:47:56.196218    7408 ip.go:210] interface addr: fe80::50c5:9f3c:a843:1adb/64
	I0401 12:47:56.196218    7408 ip.go:210] interface addr: 172.19.144.1/20
	I0401 12:47:56.207129    7408 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0401 12:47:56.214567    7408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
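
The bash one-liner above updates /etc/hosts atomically: filter out any stale host.minikube.internal record, append the current gateway IP, and copy the temp file back. The same logic as a small Go sketch, run against an in-memory sample rather than the real file:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Sample hosts content with a stale record to replace.
	hosts := "127.0.0.1\tlocalhost\n172.19.144.9\thost.minikube.internal\n"
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "172.19.144.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}
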
	I0401 12:47:56.241463    7408 kubeadm.go:877] updating cluster {Name:multinode-965600-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.3 ClusterName:multinode-965600-m02 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.151.48 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 12:47:56.241777    7408 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 12:47:56.252792    7408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 12:47:56.279089    7408 docker.go:685] Got preloaded images: 
	I0401 12:47:56.279089    7408 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0401 12:47:56.292178    7408 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 12:47:56.333457    7408 ssh_runner.go:195] Run: which lz4
	I0401 12:47:56.352697    7408 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 12:47:56.361360    7408 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 12:47:56.361494    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0401 12:47:58.415689    7408 docker.go:649] duration metric: took 2.0754392s to copy over tarball
	I0401 12:47:58.427682    7408 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 12:48:02.954467    7408 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.5267532s)
	I0401 12:48:02.954467    7408 ssh_runner.go:146] rm: /preloaded.tar.lz4
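
The preload flow above is: stat the tarball on the VM, scp the ~368 MB cached image bundle over when it is missing, untar it into /var with lz4 decompression while preserving xattrs (so file capabilities survive), then remove the tarball. A rough Go sketch of the check-then-extract step (assumes tar and lz4 exist on the target):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing; the real code scp's it from the host cache first")
		return
	}
	// Mirrors the command in the log: preserve security.capability xattrs,
	// decompress with lz4, extract under /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}
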
	I0401 12:48:03.020994    7408 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0401 12:48:03.042350    7408 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0401 12:48:03.091082    7408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:48:03.320569    7408 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 12:48:11.473727    7408 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.1530994s)
	I0401 12:48:11.484105    7408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0401 12:48:11.513845    7408 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0401 12:48:11.513845    7408 cache_images.go:84] Images are preloaded, skipping loading
	I0401 12:48:11.513845    7408 kubeadm.go:928] updating node { 172.19.151.48 8443 v1.29.3 docker true true} ...
	I0401 12:48:11.514090    7408 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-965600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.151.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-965600-m02 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 12:48:11.524481    7408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0401 12:48:11.569638    7408 cni.go:84] Creating CNI manager for ""
	I0401 12:48:11.569638    7408 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 12:48:11.569638    7408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 12:48:11.569638    7408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.151.48 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-965600-m02 NodeName:multinode-965600-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.151.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.151.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 12:48:11.569638    7408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.151.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-965600-m02"
	  kubeletExtraArgs:
	    node-ip: 172.19.151.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.151.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
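The kubeadm config rendered above is a single YAML stream holding four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that walks such a stream and lists each document's kind (uses gopkg.in/yaml.v3; the file path is the one the log writes to):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// yaml.v3's decoder yields one document per Decode call until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
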
	I0401 12:48:11.588227    7408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 12:48:11.611193    7408 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 12:48:11.624901    7408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 12:48:11.647843    7408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0401 12:48:11.687902    7408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 12:48:11.724506    7408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0401 12:48:11.776904    7408 ssh_runner.go:195] Run: grep 172.19.151.48	control-plane.minikube.internal$ /etc/hosts
	I0401 12:48:11.785674    7408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.151.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 12:48:11.825611    7408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:48:12.041395    7408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 12:48:12.075121    7408 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02 for IP: 172.19.151.48
	I0401 12:48:12.075255    7408 certs.go:194] generating shared ca certs ...
	I0401 12:48:12.075298    7408 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:12.076061    7408 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0401 12:48:12.076454    7408 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0401 12:48:12.076507    7408 certs.go:256] generating profile certs ...
	I0401 12:48:12.076507    7408 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\client.key
	I0401 12:48:12.077166    7408 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\client.crt with IP's: []
	I0401 12:48:12.264538    7408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\client.crt ...
	I0401 12:48:12.264538    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\client.crt: {Name:mk78ab77ba0a41c6e3b0a12fd9d1ed8efcc88e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:12.266551    7408 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\client.key ...
	I0401 12:48:12.266551    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\client.key: {Name:mk09579acc231f60f9c36b688f5457399a44fb55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:12.267591    7408 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.key.46718225
	I0401 12:48:12.267591    7408 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.crt.46718225 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.151.48]
	I0401 12:48:12.519733    7408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.crt.46718225 ...
	I0401 12:48:12.519733    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.crt.46718225: {Name:mk0b787105299eaf0ee48f2f5f5b4617095756dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:12.522662    7408 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.key.46718225 ...
	I0401 12:48:12.522662    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.key.46718225: {Name:mk70ba4802efcfd2fc7b7e22cf545968b834759b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:12.522662    7408 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.crt.46718225 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.crt
	I0401 12:48:12.535817    7408 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.key.46718225 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.key
	I0401 12:48:12.536688    7408 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.key
	I0401 12:48:12.536688    7408 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.crt with IP's: []
	I0401 12:48:12.772923    7408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.crt ...
	I0401 12:48:12.773789    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.crt: {Name:mk60f64daf1028bb6ccde4e6796eb2ba32bcba77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:12.774774    7408 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.key ...
	I0401 12:48:12.774774    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.key: {Name:mk5c6b7fb6ab62994664adb71df2cdb2347a0949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
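
certs.go:363 above issues CA-signed profile certificates, with the apiserver cert carrying the SANs listed at 12:48:12.267591 (10.96.0.1, 127.0.0.1, 10.0.0.1, 172.19.151.48). A minimal sketch of issuing such a cert with Go's standard library; paths and subject are illustrative, and it assumes a PKCS#1-encoded CA key, which may not match minikube's on-disk format:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the shared CA (illustrative paths).
	caPEM, _ := os.ReadFile("ca.crt")
	caKeyPEM, _ := os.ReadFile("ca.key")
	caBlock, _ := pem.Decode(caPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 key
	if err != nil {
		panic(err)
	}
	// New leaf key and a template with the same SANs as the log's apiserver cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.19.151.48"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
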
	I0401 12:48:12.785787    7408 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem (1338 bytes)
	W0401 12:48:12.786764    7408 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260_empty.pem, impossibly tiny 0 bytes
	I0401 12:48:12.786764    7408 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0401 12:48:12.786764    7408 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0401 12:48:12.786764    7408 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0401 12:48:12.786764    7408 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0401 12:48:12.788005    7408 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem (1708 bytes)
	I0401 12:48:12.789475    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 12:48:12.837264    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 12:48:12.882471    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 12:48:12.931927    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 12:48:12.982039    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 12:48:13.032397    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 12:48:13.087401    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 12:48:13.142976    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-965600-m02\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 12:48:13.198616    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\1260.pem --> /usr/share/ca-certificates/1260.pem (1338 bytes)
	I0401 12:48:13.255090    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /usr/share/ca-certificates/12602.pem (1708 bytes)
	I0401 12:48:13.300800    7408 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 12:48:13.367809    7408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 12:48:13.417577    7408 ssh_runner.go:195] Run: openssl version
	I0401 12:48:13.443043    7408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0401 12:48:13.476107    7408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0401 12:48:13.483828    7408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:39 /usr/share/ca-certificates/12602.pem
	I0401 12:48:13.496130    7408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0401 12:48:13.519540    7408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 12:48:13.556656    7408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 12:48:13.589649    7408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:48:13.598384    7408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:48:13.611242    7408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 12:48:13.635716    7408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 12:48:13.670394    7408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1260.pem && ln -fs /usr/share/ca-certificates/1260.pem /etc/ssl/certs/1260.pem"
	I0401 12:48:13.704051    7408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1260.pem
	I0401 12:48:13.711500    7408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:39 /usr/share/ca-certificates/1260.pem
	I0401 12:48:13.724025    7408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1260.pem
	I0401 12:48:13.745613    7408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1260.pem /etc/ssl/certs/51391683.0"
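
The ln -fs steps above populate the system trust store: each CA PEM gets a <subject-hash>.0 symlink under /etc/ssl/certs, where the hash is what `openssl x509 -hash` prints (b5213941 for minikubeCA.pem, per the log). A Go sketch of one such link, shelling out to openssl the way the log does (needs root for the symlink):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // replace any stale link, as ln -fs does
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		fmt.Println("symlink failed (need root):", err)
	}
}
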
	I0401 12:48:13.780365    7408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 12:48:13.787608    7408 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 12:48:13.787608    7408 kubeadm.go:391] StartCluster: {Name:multinode-965600-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.29.3 ClusterName:multinode-965600-m02 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.151.48 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:48:13.798205    7408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0401 12:48:13.838331    7408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 12:48:13.871829    7408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 12:48:13.912241    7408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 12:48:13.930983    7408 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 12:48:13.930983    7408 kubeadm.go:156] found existing configuration files:
	
	I0401 12:48:13.942557    7408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 12:48:13.959661    7408 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 12:48:13.972620    7408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 12:48:14.006197    7408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 12:48:14.024809    7408 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 12:48:14.035780    7408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 12:48:14.066775    7408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 12:48:14.086786    7408 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 12:48:14.100763    7408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 12:48:14.135888    7408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 12:48:14.155388    7408 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 12:48:14.168726    7408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 12:48:14.190294    7408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 12:48:14.287829    7408 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 12:48:14.288135    7408 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 12:48:14.533779    7408 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 12:48:14.533970    7408 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 12:48:14.534032    7408 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 12:48:15.019442    7408 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 12:48:15.023415    7408 out.go:204]   - Generating certificates and keys ...
	I0401 12:48:15.023415    7408 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 12:48:15.023415    7408 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 12:48:15.292805    7408 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 12:48:15.444567    7408 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 12:48:15.752756    7408 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 12:48:15.953253    7408 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 12:48:16.511369    7408 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 12:48:16.513151    7408 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-965600-m02] and IPs [172.19.151.48 127.0.0.1 ::1]
	I0401 12:48:16.776788    7408 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 12:48:16.776788    7408 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-965600-m02] and IPs [172.19.151.48 127.0.0.1 ::1]
	I0401 12:48:16.933929    7408 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 12:48:17.318297    7408 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 12:48:17.635281    7408 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 12:48:17.635939    7408 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 12:48:17.873180    7408 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 12:48:18.018177    7408 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 12:48:18.116285    7408 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 12:48:18.229499    7408 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 12:48:18.414498    7408 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 12:48:18.415321    7408 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 12:48:18.418602    7408 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 12:48:18.421569    7408 out.go:204]   - Booting up control plane ...
	I0401 12:48:18.421656    7408 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 12:48:18.421980    7408 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 12:48:18.422167    7408 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 12:48:18.452406    7408 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 12:48:18.452406    7408 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 12:48:18.452406    7408 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 12:48:18.692905    7408 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 12:48:27.196947    7408 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.503757 seconds
	I0401 12:48:27.229928    7408 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 12:48:27.276015    7408 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 12:48:27.833195    7408 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 12:48:27.833649    7408 kubeadm.go:309] [mark-control-plane] Marking the node multinode-965600-m02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 12:48:28.359860    7408 kubeadm.go:309] [bootstrap-token] Using token: ps855r.emd7egqu3fqsfudv
	I0401 12:48:28.362136    7408 out.go:204]   - Configuring RBAC rules ...
	I0401 12:48:28.362537    7408 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 12:48:28.379893    7408 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 12:48:28.408077    7408 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 12:48:28.418257    7408 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 12:48:28.425184    7408 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 12:48:28.434481    7408 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 12:48:28.465571    7408 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 12:48:28.886052    7408 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 12:48:28.964856    7408 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 12:48:28.966851    7408 kubeadm.go:309] 
	I0401 12:48:28.967486    7408 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 12:48:28.967486    7408 kubeadm.go:309] 
	I0401 12:48:28.967669    7408 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 12:48:28.967669    7408 kubeadm.go:309] 
	I0401 12:48:28.967761    7408 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 12:48:28.967986    7408 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 12:48:28.968061    7408 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 12:48:28.968061    7408 kubeadm.go:309] 
	I0401 12:48:28.968309    7408 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 12:48:28.968309    7408 kubeadm.go:309] 
	I0401 12:48:28.968478    7408 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 12:48:28.968478    7408 kubeadm.go:309] 
	I0401 12:48:28.968605    7408 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 12:48:28.968790    7408 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 12:48:28.968938    7408 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 12:48:28.968938    7408 kubeadm.go:309] 
	I0401 12:48:28.969385    7408 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 12:48:28.969734    7408 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 12:48:28.969734    7408 kubeadm.go:309] 
	I0401 12:48:28.970029    7408 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ps855r.emd7egqu3fqsfudv \
	I0401 12:48:28.970315    7408 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c \
	I0401 12:48:28.970379    7408 kubeadm.go:309] 	--control-plane 
	I0401 12:48:28.970379    7408 kubeadm.go:309] 
	I0401 12:48:28.970680    7408 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 12:48:28.970680    7408 kubeadm.go:309] 
	I0401 12:48:28.971007    7408 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ps855r.emd7egqu3fqsfudv \
	I0401 12:48:28.971200    7408 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76e09ca42d339a2dcc4fa3c378c993c0e388eeefc130e874fb793b3ff928911c 
	I0401 12:48:28.977939    7408 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
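
The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. Recomputing it is a one-screen Go program (the path is where this log installs the CA):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Joining nodes verify the CA they fetch against this pinned hash.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
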
	I0401 12:48:28.977939    7408 cni.go:84] Creating CNI manager for ""
	I0401 12:48:28.977939    7408 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 12:48:28.979952    7408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 12:48:28.996945    7408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 12:48:29.016949    7408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
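
The 457-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI chain; its contents are not in the log, so the JSON below is a hypothetical reconstruction (plugin list and field values are assumptions, apart from the 10.244.0.0/16 pod CIDR, which matches kubeadm.go:84):

package main

import "fmt"

// Hypothetical shape of the bridge conflist; treat every field as illustrative.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() { fmt.Println(conflist) }
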
	I0401 12:48:29.085156    7408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 12:48:29.099887    7408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 12:48:29.101878    7408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-965600-m02 minikube.k8s.io/updated_at=2024_04_01T12_48_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=multinode-965600-m02 minikube.k8s.io/primary=true
	I0401 12:48:29.127580    7408 ops.go:34] apiserver oom_adj: -16
	I0401 12:48:29.506448    7408 kubeadm.go:1107] duration metric: took 421.2888ms to wait for elevateKubeSystemPrivileges
	W0401 12:48:29.552358    7408 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 12:48:29.552358    7408 kubeadm.go:393] duration metric: took 15.7646375s to StartCluster
	I0401 12:48:29.552358    7408 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:29.552579    7408 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:48:29.555546    7408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 12:48:29.557006    7408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 12:48:29.557301    7408 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.151.48 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0401 12:48:29.559868    7408 out.go:177] * Verifying Kubernetes components...
	I0401 12:48:29.557220    7408 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 12:48:29.557365    7408 config.go:182] Loaded profile config "multinode-965600-m02": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0401 12:48:29.562747    7408 addons.go:69] Setting storage-provisioner=true in profile "multinode-965600-m02"
	I0401 12:48:29.562747    7408 addons.go:69] Setting default-storageclass=true in profile "multinode-965600-m02"
	I0401 12:48:29.562747    7408 addons.go:234] Setting addon storage-provisioner=true in "multinode-965600-m02"
	I0401 12:48:29.562747    7408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-965600-m02"
	I0401 12:48:29.562747    7408 host.go:66] Checking if "multinode-965600-m02" exists ...
	I0401 12:48:29.563602    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:48:29.564595    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:48:29.580335    7408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:48:29.815494    7408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 12:48:29.938325    7408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 12:48:30.465595    7408 start.go:946] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
	I0401 12:48:30.470050    7408 api_server.go:52] waiting for apiserver process to appear ...
	I0401 12:48:30.482822    7408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 12:48:30.520395    7408 api_server.go:72] duration metric: took 963.0867ms to wait for apiserver process to appear ...
	I0401 12:48:30.520395    7408 api_server.go:88] waiting for apiserver healthz status ...
	I0401 12:48:30.520395    7408 api_server.go:253] Checking apiserver healthz at https://172.19.151.48:8443/healthz ...
	I0401 12:48:30.532817    7408 api_server.go:279] https://172.19.151.48:8443/healthz returned 200:
	ok
	I0401 12:48:30.536277    7408 api_server.go:141] control plane version: v1.29.3
	I0401 12:48:30.536277    7408 api_server.go:131] duration metric: took 15.8818ms to wait for apiserver health ...
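
The health wait in api_server.go above boils down to polling https://<node>:8443/healthz until it returns 200, which happens here within ~16 ms. A stripped-down Go sketch of that loop (it skips TLS verification for brevity; the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://172.19.151.48:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
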
	I0401 12:48:30.536277    7408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 12:48:30.547583    7408 system_pods.go:59] 4 kube-system pods found
	I0401 12:48:30.547583    7408 system_pods.go:61] "etcd-multinode-965600-m02" [f7e97deb-9032-4e07-9650-51b18d7326eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 12:48:30.547583    7408 system_pods.go:61] "kube-apiserver-multinode-965600-m02" [da0ab686-1bc5-403c-94c8-8b4e3464ae92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 12:48:30.547583    7408 system_pods.go:61] "kube-controller-manager-multinode-965600-m02" [c4244de6-33c4-4d4b-b30b-6371c310d2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 12:48:30.547583    7408 system_pods.go:61] "kube-scheduler-multinode-965600-m02" [ca234d91-3a70-4b6e-bf69-6aac2cc5829a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 12:48:30.547583    7408 system_pods.go:74] duration metric: took 11.3058ms to wait for pod list to return data ...
	I0401 12:48:30.547583    7408 kubeadm.go:576] duration metric: took 990.2743ms to wait for: map[apiserver:true system_pods:true]
	I0401 12:48:30.547583    7408 node_conditions.go:102] verifying NodePressure condition ...
	I0401 12:48:30.557391    7408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 12:48:30.557391    7408 node_conditions.go:123] node cpu capacity is 2
	I0401 12:48:30.557391    7408 node_conditions.go:105] duration metric: took 9.8081ms to run NodePressure ...
	I0401 12:48:30.557391    7408 start.go:240] waiting for startup goroutines ...
	I0401 12:48:30.981820    7408 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-965600-m02" context rescaled to 1 replicas
	I0401 12:48:31.974970    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:48:31.974970    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:48:31.975070    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:48:31.975070    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:48:31.977764    7408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 12:48:31.977764    7408 addons.go:234] Setting addon default-storageclass=true in "multinode-965600-m02"
	I0401 12:48:31.980298    7408 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 12:48:31.980298    7408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 12:48:31.980298    7408 host.go:66] Checking if "multinode-965600-m02" exists ...
	I0401 12:48:31.980298    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:48:31.981150    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:48:34.358701    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:48:34.358701    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:48:34.358701    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:48:34.398624    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:48:34.398624    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:48:34.399432    7408 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 12:48:34.399432    7408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 12:48:34.399432    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-965600-m02 ).state
	I0401 12:48:36.762661    7408 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:48:36.762661    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:48:36.762661    7408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-965600-m02 ).networkadapters[0]).ipaddresses[0]
	I0401 12:48:37.221014    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:48:37.221014    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:48:37.221699    7408 sshutil.go:53] new ssh client: &{IP:172.19.151.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\id_rsa Username:docker}
	I0401 12:48:37.379174    7408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 12:48:39.547599    7408 main.go:141] libmachine: [stdout =====>] : 172.19.151.48
	
	I0401 12:48:39.547780    7408 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:48:39.548500    7408 sshutil.go:53] new ssh client: &{IP:172.19.151.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-965600-m02\id_rsa Username:docker}
	I0401 12:48:39.724472    7408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 12:48:39.910540    7408 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 12:48:39.913824    7408 addons.go:505] duration metric: took 10.3565304s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 12:48:39.913924    7408 start.go:245] waiting for cluster config update ...
	I0401 12:48:39.913924    7408 start.go:254] writing updated cluster config ...
	I0401 12:48:39.926706    7408 ssh_runner.go:195] Run: rm -f paused
	I0401 12:48:40.105556    7408 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 12:48:40.111120    7408 out.go:177] * Done! kubectl is now configured to use "multinode-965600-m02" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.950804015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.950819215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:41 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:41.953925199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.044358560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.044689768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.044832171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.045317284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.213955306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.214218312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.214321515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.214607922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 cri-dockerd[1236]: time="2024-04-01T12:40:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d4f816d5225de88e2b9bad13f6620cfba528082ffc80dae33ad6f287ee5759e/resolv.conf as [nameserver 172.19.144.1]"
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.607110949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.609646013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.609796917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:40:42 multinode-965600 dockerd[1348]: time="2024-04-01T12:40:42.610127625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 01 12:41:20 multinode-965600 dockerd[1342]: 2024/04/01 12:41:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
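
	The nine identical dockerd lines above are Go's net/http warning that WriteHeader was invoked twice on the same response, surfacing here through the otelhttp instrumentation wrapper. A minimal standalone Go sketch (illustrative only, unrelated to the dockerd sources) that reproduces the identical log line:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK)      // first call sets the status
			w.WriteHeader(http.StatusCreated) // ignored; net/http logs "superfluous response.WriteHeader call"
		})
		log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
	}

	The second status code is simply dropped, so the warning is cosmetic for the response itself.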
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	891f5bd08d881       6e38f40d628db                                                                              9 minutes ago       Running             storage-provisioner       0                   4d4f816d5225d       storage-provisioner
	22dd1fb8d37bf       cbb01a7bd410d                                                                              9 minutes ago       Running             coredns                   0                   8c3e5fcb7cd04       coredns-76f75df574-wbm5g
	54e585be3aa8a       cbb01a7bd410d                                                                              9 minutes ago       Running             coredns                   0                   68baba24d8cad       coredns-76f75df574-vhxkq
	a2f7ee6fd8ef6       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988   9 minutes ago       Running             kindnet-cni               0                   b6d0f9166fdae       kindnet-pfltb
	613f71eef82cf       a1d263b5dc5b0                                                                              9 minutes ago       Running             kube-proxy                0                   04e418a4ad3e0       kube-proxy-426zj
	06e3c5e6bd4c9       8c390d98f50c0                                                                              9 minutes ago       Running             kube-scheduler            0                   c44326475a9ec       kube-scheduler-multinode-965600
	dca4edaf448be       6052a25da3f97                                                                              9 minutes ago       Running             kube-controller-manager   0                   a717fef2af390       kube-controller-manager-multinode-965600
	ef878fda62c3c       39f995c9f1996                                                                              9 minutes ago       Running             kube-apiserver            0                   b7c0d239b48c6       kube-apiserver-multinode-965600
	4b2ab1b996436       3861cfcd7c04c                                                                              9 minutes ago       Running             etcd                      0                   d24092c77cc2f       etcd-multinode-965600
	
	
	==> coredns [22dd1fb8d37b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [54e585be3aa8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               multinode-965600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-965600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=multinode-965600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T12_40_19_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 12:40:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-965600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 12:49:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 12:45:55 +0000   Mon, 01 Apr 2024 12:40:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 12:45:55 +0000   Mon, 01 Apr 2024 12:40:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 12:45:55 +0000   Mon, 01 Apr 2024 12:40:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 12:45:55 +0000   Mon, 01 Apr 2024 12:40:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.154.221
	  Hostname:    multinode-965600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7bf740c4c3e4623af73d139ade1038b
	  System UUID:                5de17ba9-5551-ee4e-8bca-5a015d97e7a1
	  Boot ID:                    e629ae01-f590-4fc1-841d-eb6d8fae7343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-vhxkq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m27s
	  kube-system                 coredns-76f75df574-wbm5g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m27s
	  kube-system                 etcd-multinode-965600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m40s
	  kube-system                 kindnet-pfltb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m27s
	  kube-system                 kube-apiserver-multinode-965600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-controller-manager-multinode-965600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-proxy-426zj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 kube-scheduler-multinode-965600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m25s                  kube-proxy       
	  Normal  Starting                 9m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m49s (x6 over 9m49s)  kubelet          Node multinode-965600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m49s (x5 over 9m49s)  kubelet          Node multinode-965600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m49s (x5 over 9m49s)  kubelet          Node multinode-965600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m40s                  kubelet          Node multinode-965600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s                  kubelet          Node multinode-965600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s                  kubelet          Node multinode-965600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m27s                  node-controller  Node multinode-965600 event: Registered Node multinode-965600 in Controller
	  Normal  NodeReady                9m18s                  kubelet          Node multinode-965600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 1 12:39] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.107873] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.083636] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[ +28.153562] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.113276] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.613126] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.225954] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.243774] systemd-fstab-generator[1019]: Ignoring "noauto" option for root device
	[  +2.877747] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.218030] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.220297] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.308078] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[ +12.256398] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +0.140340] kauditd_printk_skb: 205 callbacks suppressed
	[Apr 1 12:40] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +6.961001] systemd-fstab-generator[1803]: Ignoring "noauto" option for root device
	[  +0.111266] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.944322] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.179946] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.376863] systemd-fstab-generator[4412]: Ignoring "noauto" option for root device
	[  +0.217576] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.698919] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.035173] kauditd_printk_skb: 33 callbacks suppressed
	[Apr 1 12:45] hrtimer: interrupt took 827205 ns
	
	
	==> etcd [4b2ab1b99643] <==
	{"level":"info","ts":"2024-04-01T12:49:23.929736Z","caller":"traceutil/trace.go:171","msg":"trace[1696494495] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"243.237956ms","start":"2024-04-01T12:49:23.686479Z","end":"2024-04-01T12:49:23.929717Z","steps":["trace[1696494495] 'process raft request'  (duration: 243.083355ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T12:49:26.126861Z","caller":"traceutil/trace.go:171","msg":"trace[1473342070] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"185.835022ms","start":"2024-04-01T12:49:25.941005Z","end":"2024-04-01T12:49:26.12684Z","steps":["trace[1473342070] 'process raft request'  (duration: 185.678722ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:49:26.575306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.910747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T12:49:26.575425Z","caller":"traceutil/trace.go:171","msg":"trace[1755187683] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:856; }","duration":"109.032147ms","start":"2024-04-01T12:49:26.46634Z","end":"2024-04-01T12:49:26.575372Z","steps":["trace[1755187683] 'range keys from in-memory index tree'  (duration: 108.730147ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T12:49:27.536069Z","caller":"traceutil/trace.go:171","msg":"trace[671656656] transaction","detail":"{read_only:false; response_revision:858; number_of_response:1; }","duration":"113.135557ms","start":"2024-04-01T12:49:27.422913Z","end":"2024-04-01T12:49:27.536049Z","steps":["trace[671656656] 'process raft request'  (duration: 113.000657ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T12:49:28.265532Z","caller":"traceutil/trace.go:171","msg":"trace[1232797899] transaction","detail":"{read_only:false; response_revision:859; number_of_response:1; }","duration":"128.602391ms","start":"2024-04-01T12:49:28.136907Z","end":"2024-04-01T12:49:28.26551Z","steps":["trace[1232797899] 'process raft request'  (duration: 128.12999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:49:28.673001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.360069ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T12:49:28.673819Z","caller":"traceutil/trace.go:171","msg":"trace[735493276] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:859; }","duration":"208.190171ms","start":"2024-04-01T12:49:28.465567Z","end":"2024-04-01T12:49:28.673757Z","steps":["trace[735493276] 'range keys from in-memory index tree'  (duration: 207.283669ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T12:49:30.581636Z","caller":"traceutil/trace.go:171","msg":"trace[181502446] transaction","detail":"{read_only:false; response_revision:860; number_of_response:1; }","duration":"305.016087ms","start":"2024-04-01T12:49:30.276581Z","end":"2024-04-01T12:49:30.581596Z","steps":["trace[181502446] 'process raft request'  (duration: 304.846987ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T12:49:30.582177Z","caller":"traceutil/trace.go:171","msg":"trace[239535000] linearizableReadLoop","detail":"{readStateIndex:980; appliedIndex:980; }","duration":"116.354062ms","start":"2024-04-01T12:49:30.465809Z","end":"2024-04-01T12:49:30.582163Z","steps":["trace[239535000] 'read index received'  (duration: 116.346362ms)","trace[239535000] 'applied index is now lower than readState.Index'  (duration: 6.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T12:49:30.58254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.717163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T12:49:30.582626Z","caller":"traceutil/trace.go:171","msg":"trace[887506913] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:860; }","duration":"116.835063ms","start":"2024-04-01T12:49:30.465778Z","end":"2024-04-01T12:49:30.582613Z","steps":["trace[887506913] 'agreement among raft nodes before linearized reading'  (duration: 116.583463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:49:30.584083Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T12:49:30.276565Z","time spent":"305.342388ms","remote":"127.0.0.1:33936","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:859 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-01T12:49:30.928284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.355673ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7510472119270936885 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-965600\" mod_revision:853 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-965600\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-965600\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-01T12:49:30.928701Z","caller":"traceutil/trace.go:171","msg":"trace[1551234832] linearizableReadLoop","detail":"{readStateIndex:981; appliedIndex:980; }","duration":"344.686077ms","start":"2024-04-01T12:49:30.583994Z","end":"2024-04-01T12:49:30.92868Z","steps":["trace[1551234832] 'read index received'  (duration: 45.821703ms)","trace[1551234832] 'applied index is now lower than readState.Index'  (duration: 298.863174ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T12:49:30.928963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.961677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T12:49:30.929755Z","caller":"traceutil/trace.go:171","msg":"trace[511211223] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:861; }","duration":"345.81228ms","start":"2024-04-01T12:49:30.583933Z","end":"2024-04-01T12:49:30.929745Z","steps":["trace[511211223] 'agreement among raft nodes before linearized reading'  (duration: 344.998878ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:49:30.929817Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T12:49:30.583902Z","time spent":"345.90148ms","remote":"127.0.0.1:33824","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-01T12:49:30.929267Z","caller":"traceutil/trace.go:171","msg":"trace[1722757246] transaction","detail":"{read_only:false; response_revision:861; number_of_response:1; }","duration":"394.402189ms","start":"2024-04-01T12:49:30.53485Z","end":"2024-04-01T12:49:30.929252Z","steps":["trace[1722757246] 'process raft request'  (duration: 95.011314ms)","trace[1722757246] 'compare'  (duration: 297.900872ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T12:49:30.930005Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T12:49:30.534828Z","time spent":"395.135791ms","remote":"127.0.0.1:34058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-965600\" mod_revision:853 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-965600\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-965600\" > >"}
	{"level":"warn","ts":"2024-04-01T12:49:31.375232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.922979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T12:49:31.375517Z","caller":"traceutil/trace.go:171","msg":"trace[1072106295] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:861; }","duration":"213.242979ms","start":"2024-04-01T12:49:31.162258Z","end":"2024-04-01T12:49:31.375501Z","steps":["trace[1072106295] 'count revisions from in-memory index tree'  (duration: 212.855079ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:49:31.375341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.979438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4522"}
	{"level":"info","ts":"2024-04-01T12:49:31.376439Z","caller":"traceutil/trace.go:171","msg":"trace[720179482] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:861; }","duration":"240.01614ms","start":"2024-04-01T12:49:31.136322Z","end":"2024-04-01T12:49:31.376338Z","steps":["trace[720179482] 'range keys from in-memory index tree'  (duration: 238.735737ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T12:49:32.427844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.518767ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7510472119270936891 > lease_revoke:<id:683a8e99ac9e9100>","response":"size:29"}
	
	
	==> kernel <==
	 12:49:58 up 11 min,  0 users,  load average: 0.59, 0.35, 0.22
	Linux multinode-965600 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a2f7ee6fd8ef] <==
	I0401 12:47:51.013608       1 main.go:227] handling current node
	I0401 12:48:01.026660       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:48:01.026728       1 main.go:227] handling current node
	I0401 12:48:11.043223       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:48:11.043350       1 main.go:227] handling current node
	I0401 12:48:21.058982       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:48:21.059066       1 main.go:227] handling current node
	I0401 12:48:31.075338       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:48:31.075500       1 main.go:227] handling current node
	I0401 12:48:41.082601       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:48:41.082704       1 main.go:227] handling current node
	I0401 12:48:51.096588       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:48:51.096724       1 main.go:227] handling current node
	I0401 12:49:01.105697       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:49:01.105727       1 main.go:227] handling current node
	I0401 12:49:11.118741       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:49:11.118788       1 main.go:227] handling current node
	I0401 12:49:21.131901       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:49:21.131999       1 main.go:227] handling current node
	I0401 12:49:31.379318       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:49:31.379485       1 main.go:227] handling current node
	I0401 12:49:41.385964       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:49:41.386414       1 main.go:227] handling current node
	I0401 12:49:51.401435       1 main.go:223] Handling node with IPs: map[172.19.154.221:{}]
	I0401 12:49:51.401541       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ef878fda62c3] <==
	I0401 12:40:14.356248       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 12:40:15.097610       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 12:40:15.107127       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 12:40:15.107228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 12:40:16.420062       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 12:40:16.508606       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 12:40:16.612974       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 12:40:16.628214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.154.221]
	I0401 12:40:16.629843       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 12:40:16.639111       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 12:40:17.171029       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 12:40:18.378900       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 12:40:18.403226       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 12:40:18.433325       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 12:40:31.074669       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0401 12:40:31.230077       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0401 12:44:34.427982       1 trace.go:236] Trace[682407267]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:af3a4197-d962-438b-b5e5-6067dd5c32cf,client:172.19.154.221,api-group:coordination.k8s.io,api-version:v1,name:multinode-965600,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-965600,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:PUT (01-Apr-2024 12:44:33.591) (total time: 836ms):
	Trace[682407267]: ["GuaranteedUpdate etcd3" audit-id:af3a4197-d962-438b-b5e5-6067dd5c32cf,key:/leases/kube-node-lease/multinode-965600,type:*coordination.Lease,resource:leases.coordination.k8s.io 836ms (12:44:33.591)
	Trace[682407267]:  ---"Txn call completed" 835ms (12:44:34.427)]
	Trace[682407267]: [836.357238ms] [836.357238ms] END
	I0401 12:48:07.428247       1 trace.go:236] Trace[1325675596]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.154.221,type:*v1.Endpoints,resource:apiServerIPInfo (01-Apr-2024 12:48:06.627) (total time: 800ms):
	Trace[1325675596]: ---"initial value restored" 140ms (12:48:06.767)
	Trace[1325675596]: ---"Transaction prepared" 203ms (12:48:06.971)
	Trace[1325675596]: ---"Txn call completed" 456ms (12:48:07.428)
	Trace[1325675596]: [800.589822ms] [800.589822ms] END
	
	
	==> kube-controller-manager [dca4edaf448b] <==
	I0401 12:40:31.180797       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 12:40:31.211882       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0401 12:40:31.215128       1 shared_informer.go:318] Caches are synced for HPA
	I0401 12:40:31.216640       1 shared_informer.go:318] Caches are synced for endpoint
	I0401 12:40:31.244152       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 12:40:31.249678       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-vhxkq"
	I0401 12:40:31.346277       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-wbm5g"
	I0401 12:40:31.374799       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-426zj"
	I0401 12:40:31.410203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="312.376076ms"
	I0401 12:40:31.411778       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pfltb"
	I0401 12:40:31.555913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="145.606924ms"
	I0401 12:40:31.556084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="116.498µs"
	I0401 12:40:31.590097       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 12:40:31.590732       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0401 12:40:31.643786       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 12:40:40.792903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.698µs"
	I0401 12:40:40.804215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="92.3µs"
	I0401 12:40:40.824154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.5µs"
	I0401 12:40:40.843605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="267.798µs"
	I0401 12:40:41.085983       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0401 12:40:43.473222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.403µs"
	I0401 12:40:43.544358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="33.576528ms"
	I0401 12:40:43.545112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="64.601µs"
	I0401 12:40:43.621782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="29.60553ms"
	I0401 12:40:43.621885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="47.502µs"
	
	
	==> kube-proxy [613f71eef82c] <==
	I0401 12:40:32.708837       1 server_others.go:72] "Using iptables proxy"
	I0401 12:40:32.732846       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.154.221"]
	I0401 12:40:32.840480       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 12:40:32.841353       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 12:40:32.841515       1 server_others.go:168] "Using iptables Proxier"
	I0401 12:40:32.846360       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 12:40:32.846972       1 server.go:865] "Version info" version="v1.29.3"
	I0401 12:40:32.847176       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 12:40:32.849661       1 config.go:188] "Starting service config controller"
	I0401 12:40:32.849821       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 12:40:32.849850       1 config.go:97] "Starting endpoint slice config controller"
	I0401 12:40:32.849857       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 12:40:32.850966       1 config.go:315] "Starting node config controller"
	I0401 12:40:32.851069       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 12:40:32.951264       1 shared_informer.go:318] Caches are synced for node config
	I0401 12:40:32.951322       1 shared_informer.go:318] Caches are synced for service config
	I0401 12:40:32.951356       1 shared_informer.go:318] Caches are synced for endpoint slice config
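
	The "Setting route_localnet=1" line above records an actual sysctl write: kube-proxy enables it so packets addressed to 127.0.0.1 can be DNAT'ed to NodePorts, and the log itself names the two flags (--iptables-localhost-nodeports, --nodeport-addresses) that change the behavior. A small Go probe, hypothetical and not kube-proxy code, that reads the knob back inside the VM:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// kube-proxy sets this file to 1; see the log line above.
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
		if err != nil {
			fmt.Println("cannot read sysctl:", err)
			return
		}
		fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
	}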
	
	
	==> kube-scheduler [06e3c5e6bd4c] <==
	W0401 12:40:15.429238       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 12:40:15.429301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 12:40:15.479491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 12:40:15.479799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 12:40:15.575892       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 12:40:15.575996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 12:40:15.582700       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 12:40:15.582869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 12:40:15.596457       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 12:40:15.596506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 12:40:15.626558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 12:40:15.626761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 12:40:15.694771       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 12:40:15.695046       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 12:40:15.706788       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 12:40:15.706845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 12:40:15.744587       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 12:40:15.744860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 12:40:15.776916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 12:40:15.777013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 12:40:15.799367       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 12:40:15.805566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 12:40:15.862621       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 12:40:15.862761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0401 12:40:18.656856       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 12:45:18 multinode-965600 kubelet[2839]: E0401 12:45:18.653903    2839 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 12:45:18 multinode-965600 kubelet[2839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 12:45:18 multinode-965600 kubelet[2839]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 12:45:18 multinode-965600 kubelet[2839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 12:45:18 multinode-965600 kubelet[2839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 12:46:18 multinode-965600 kubelet[2839]: E0401 12:46:18.654801    2839 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 12:46:18 multinode-965600 kubelet[2839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 12:46:18 multinode-965600 kubelet[2839]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 12:46:18 multinode-965600 kubelet[2839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 12:46:18 multinode-965600 kubelet[2839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 12:47:18 multinode-965600 kubelet[2839]: E0401 12:47:18.654441    2839 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 12:47:18 multinode-965600 kubelet[2839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 12:47:18 multinode-965600 kubelet[2839]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 12:47:18 multinode-965600 kubelet[2839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 12:47:18 multinode-965600 kubelet[2839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 12:48:18 multinode-965600 kubelet[2839]: E0401 12:48:18.655352    2839 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 12:48:18 multinode-965600 kubelet[2839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 12:48:18 multinode-965600 kubelet[2839]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 12:48:18 multinode-965600 kubelet[2839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 12:48:18 multinode-965600 kubelet[2839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 12:49:18 multinode-965600 kubelet[2839]: E0401 12:49:18.661962    2839 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 12:49:18 multinode-965600 kubelet[2839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 12:49:18 multinode-965600 kubelet[2839]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 12:49:18 multinode-965600 kubelet[2839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 12:49:18 multinode-965600 kubelet[2839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
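
	Every one of the kubelet errors above has the same root cause: the guest kernel lacks IPv6 NAT support, so ip6tables cannot initialize its `nat' table and the periodic KUBE-KUBELET-CANARY probe fails on the IPv6 side only (the IPv4 rules are unaffected). A hypothetical Go probe, not part of any component here, that reproduces the same non-zero exit inside the VM:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask ip6tables for the nat table directly; on this image the
		// command fails exactly as the kubelet canary does.
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("ip6tables nat table unavailable:", err)
		}
	}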
	
	
	==> storage-provisioner [891f5bd08d88] <==
	I0401 12:40:42.704544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 12:40:42.717625       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 12:40:42.717935       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 12:40:42.740093       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 12:40:42.742252       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-965600_89db7636-5ee2-4db0-b079-443518af507b!
	I0401 12:40:42.742663       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1535733-13af-4426-8790-ceb734773f61", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-965600_89db7636-5ee2-4db0-b079-443518af507b became leader
	I0401 12:40:42.843316       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-965600_89db7636-5ee2-4db0-b079-443518af507b!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:49:49.947169    8212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-965600 -n multinode-965600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-965600 -n multinode-965600: (13.0098464s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-965600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (517.76s)
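
The hashed directory in the stderr warning above is deterministic: Docker stores CLI contexts under ~/.docker/contexts/meta/<sha256(context name)>/meta.json, and 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f is sha256("default"). A short Go sketch (illustrative, not the Docker CLI implementation) that derives the same path:

	package main

	import (
		"crypto/sha256"
		"fmt"
		"path/filepath"
	)

	func main() {
		// The context name, not any file's contents, is what gets hashed.
		digest := fmt.Sprintf("%x", sha256.Sum256([]byte("default")))
		fmt.Println(digest) // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
		fmt.Println(filepath.Join(`C:\Users\jenkins.minikube6\.docker\contexts\meta`, digest, "meta.json"))
	}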

                                                
                                    
x
+
TestPreload (596.19s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-171600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0401 12:53:23.491950    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-171600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m39.9337154s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-171600 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-171600 image pull gcr.io/k8s-minikube/busybox: (9.1439259s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-171600
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-171600: (41.0673934s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-171600 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0401 12:58:23.490996    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-171600 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: exit status 90 (3m11.4446779s)

                                                
                                                
-- stdout --
	* [test-preload-171600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the hyperv driver based on existing profile
	* Starting "test-preload-171600" primary control-plane node in "test-preload-171600" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing hyperv VM for "test-preload-171600" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 12:57:00.460474    7280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0401 12:57:00.540396    7280 out.go:291] Setting OutFile to fd 816 ...
	I0401 12:57:00.540595    7280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:57:00.540595    7280 out.go:304] Setting ErrFile to fd 948...
	I0401 12:57:00.541202    7280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 12:57:00.564510    7280 out.go:298] Setting JSON to false
	I0401 12:57:00.568549    7280 start.go:129] hostinfo: {"hostname":"minikube6","uptime":318978,"bootTime":1711657241,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 12:57:00.568549    7280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 12:57:00.825933    7280 out.go:177] * [test-preload-171600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 12:57:00.839812    7280 notify.go:220] Checking for updates...
	I0401 12:57:00.869811    7280 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 12:57:01.125834    7280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 12:57:01.232903    7280 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 12:57:01.398892    7280 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 12:57:01.572733    7280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 12:57:01.587889    7280 config.go:182] Loaded profile config "test-preload-171600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0401 12:57:01.616390    7280 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 12:57:01.738354    7280 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 12:57:07.669192    7280 out.go:177] * Using the hyperv driver based on existing profile
	I0401 12:57:07.724887    7280 start.go:297] selected driver: hyperv
	I0401 12:57:07.725082    7280 start.go:901] validating driver "hyperv" against &{Name:test-preload-171600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.155.228 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 12:57:07.725157    7280 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 12:57:07.782561    7280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 12:57:07.782561    7280 cni.go:84] Creating CNI manager for ""
	I0401 12:57:07.783142    7280 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 12:57:07.783265    7280 start.go:340] cluster config:
	{Name:test-preload-171600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.155.228 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
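
Both config dumps above are Go structs printed with fmt's %+v verb, which is why every field renders as Name:value and zero-valued fields appear blank. A minimal sketch of how that style of dump is produced; the ClusterConfig type and its fields here are illustrative stand-ins, not minikube's real types:

    package main

    import "fmt"

    // ClusterConfig is a stand-in for minikube's cluster config struct;
    // the real type lives in the minikube source and has many more fields.
    type ClusterConfig struct {
        Name   string
        Driver string
        Memory int
        CPUs   int
    }

    func main() {
        cfg := ClusterConfig{Name: "test-preload-171600", Driver: "hyperv", Memory: 2200, CPUs: 2}
        // %+v prints field names alongside values, producing the
        // Name:value style seen in the log above.
        fmt.Printf("cluster config: %+v\n", cfg)
    }
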
	I0401 12:57:07.783265    7280 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 12:57:07.786718    7280 out.go:177] * Starting "test-preload-171600" primary control-plane node in "test-preload-171600" cluster
	I0401 12:57:07.791416    7280 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0401 12:57:07.831897    7280 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0401 12:57:07.831927    7280 cache.go:56] Caching tarball of preloaded images
	I0401 12:57:07.832316    7280 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0401 12:57:07.978490    7280 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0401 12:57:08.079807    7280 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0401 12:57:08.151302    7280 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4?checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0401 12:57:11.971089    7280 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0401 12:57:11.972003    7280 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0401 12:57:13.153835    7280 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on docker
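
The preload URL above carries a ?checksum=md5:... query, and the log shows separate "getting checksum", "saving checksum", and "verifying checksum" steps for the tarball. A minimal sketch of that verification, assuming the expected digest is known up front; the file name and digest are taken from the log, while the verifyMD5 helper itself is illustrative:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 streams the file through an MD5 hash and compares the
    // hex digest against the expected value from the download URL.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        err := verifyMD5("preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4",
            "20cbd62a1b5d1968f21881a4a0f4f59e")
        fmt.Println(err)
    }
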
	I0401 12:57:13.154731    7280 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\test-preload-171600\config.json ...
	I0401 12:57:13.156492    7280 start.go:360] acquireMachinesLock for test-preload-171600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 12:57:13.156492    7280 start.go:364] duration metric: took 0s to acquireMachinesLock for "test-preload-171600"
	I0401 12:57:13.157664    7280 start.go:96] Skipping create...Using existing machine configuration
	I0401 12:57:13.157664    7280 fix.go:54] fixHost starting: 
	I0401 12:57:13.158484    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:16.064479    7280 main.go:141] libmachine: [stdout =====>] : Off
	
	I0401 12:57:16.064656    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:16.064716    7280 fix.go:112] recreateIfNeeded on test-preload-171600: state=Stopped err=<nil>
	W0401 12:57:16.064716    7280 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 12:57:16.185340    7280 out.go:177] * Restarting existing hyperv VM for "test-preload-171600" ...
	I0401 12:57:16.323212    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM test-preload-171600
	I0401 12:57:19.774835    7280 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:57:19.774922    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:19.774986    7280 main.go:141] libmachine: Waiting for host to start...
	I0401 12:57:19.774986    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:22.182935    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:57:22.183023    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:22.183124    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:57:24.806210    7280 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:57:24.806896    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:25.812918    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:28.176894    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:57:28.177166    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:28.177166    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:57:30.857830    7280 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:57:30.857830    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:31.861110    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:34.194953    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:57:34.194953    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:34.195159    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:57:36.912181    7280 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:57:36.912181    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:37.924780    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:40.254187    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:57:40.254750    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:40.254750    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:57:42.919632    7280 main.go:141] libmachine: [stdout =====>] : 
	I0401 12:57:42.919992    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:43.928048    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:46.283986    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:57:46.284385    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:46.284385    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:57:48.997567    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:57:48.997567    7280 main.go:141] libmachine: [stderr =====>] : 
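
The "Waiting for host to start..." stretch above is a poll loop: each iteration shells out to PowerShell for the VM state and the first NIC's first IP address, and repeats until Hyper-V reports one (empty stdout means no address has been assigned yet). A rough sketch of that loop, assuming powershell.exe is on PATH; the 30-attempt budget is invented for illustration and is not minikube's real timeout:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // vmIP asks Hyper-V for the first IP address on the VM's first NIC,
    // mirroring the PowerShell command in the log above.
    func vmIP(name string) (string, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name))
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // Poll until Hyper-V reports an address; 30 one-second attempts
        // is an arbitrary budget for this sketch.
        for i := 0; i < 30; i++ {
            ip, err := vmIP("test-preload-171600")
            if err == nil && ip != "" {
                fmt.Println("host is up at", ip)
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for an IP")
    }
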
	I0401 12:57:49.000174    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:51.228388    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:57:51.228577    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:51.228577    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:57:53.894862    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:57:53.894862    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:53.895224    7280 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\test-preload-171600\config.json ...
	I0401 12:57:53.897712    7280 machine.go:94] provisionDockerMachine start ...
	I0401 12:57:53.897819    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:57:56.143297    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:57:56.143474    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:56.143590    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:57:58.869067    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:57:58.869067    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:57:58.876009    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:57:58.876555    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:57:58.876555    7280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 12:57:59.013022    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 12:57:59.013022    7280 buildroot.go:166] provisioning hostname "test-preload-171600"
	I0401 12:57:59.013022    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:01.234854    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:01.234854    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:01.235310    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:03.910795    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:03.910795    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:03.916093    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:58:03.916322    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:58:03.916852    7280 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-171600 && echo "test-preload-171600" | sudo tee /etc/hostname
	I0401 12:58:04.087325    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-171600
	
	I0401 12:58:04.087325    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:06.371261    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:06.371261    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:06.371361    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:09.083112    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:09.083389    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:09.089835    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:58:09.090654    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:58:09.090654    7280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-171600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-171600/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-171600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 12:58:09.239158    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
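
"Using SSH client type: native" means the provisioner drives its own SSH session from Go rather than shelling out to an ssh binary. A compact sketch of running one remote command that way with golang.org/x/crypto/ssh; the host, user, and key path are taken from the log, and host-key verification is skipped purely to keep the sketch short:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-171600\id_rsa`)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Host-key checking is skipped in this sketch; a real
            // provisioner should pin the VM's host key.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "172.19.158.106:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }
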
	I0401 12:58:09.239158    7280 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0401 12:58:09.239158    7280 buildroot.go:174] setting up certificates
	I0401 12:58:09.239158    7280 provision.go:84] configureAuth start
	I0401 12:58:09.239158    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:11.504434    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:11.504802    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:11.504802    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:14.266742    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:14.266742    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:14.267240    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:16.559619    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:16.559857    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:16.559857    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:19.247629    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:19.247833    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:19.247833    7280 provision.go:143] copyHostCerts
	I0401 12:58:19.247892    7280 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0401 12:58:19.247892    7280 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0401 12:58:19.248802    7280 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0401 12:58:19.249751    7280 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0401 12:58:19.249751    7280 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0401 12:58:19.250553    7280 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0401 12:58:19.251426    7280 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0401 12:58:19.251426    7280 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0401 12:58:19.252112    7280 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0401 12:58:19.253034    7280 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.test-preload-171600 san=[127.0.0.1 172.19.158.106 localhost minikube test-preload-171600]
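
configureAuth reissues the Docker server certificate with the VM's current address in its subject alternative names, which is why the san=[...] list above includes 172.19.158.106. A self-contained sketch of minting a SAN-bearing server certificate with crypto/x509; unlike minikube, which signs with its stored ca.pem/ca-key.pem, this sketch self-signs for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway key pair standing in for minikube's stored CA key.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-171600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            // SANs mirroring the san=[...] list in the log.
            DNSNames:    []string{"localhost", "minikube", "test-preload-171600"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.158.106")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        fmt.Println(len(der), err)
    }
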
	I0401 12:58:19.786314    7280 provision.go:177] copyRemoteCerts
	I0401 12:58:19.798056    7280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 12:58:19.798056    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:22.033977    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:22.034796    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:22.034890    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:24.741834    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:24.742443    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:24.742502    7280 sshutil.go:53] new ssh client: &{IP:172.19.158.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-171600\id_rsa Username:docker}
	I0401 12:58:24.857883    7280 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0597916s)
	I0401 12:58:24.858790    7280 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 12:58:24.910899    7280 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 12:58:24.961315    7280 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 12:58:25.018292    7280 provision.go:87] duration metric: took 15.7790243s to configureAuth
	I0401 12:58:25.018292    7280 buildroot.go:189] setting minikube options for container-runtime
	I0401 12:58:25.018292    7280 config.go:182] Loaded profile config "test-preload-171600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0401 12:58:25.018292    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:27.268254    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:27.268485    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:27.268567    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:29.948309    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:29.948533    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:29.954292    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:58:29.954868    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:58:29.954868    7280 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0401 12:58:30.095049    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0401 12:58:30.095146    7280 buildroot.go:70] root file system type: tmpfs
	I0401 12:58:30.095291    7280 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0401 12:58:30.095437    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:32.360790    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:32.360790    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:32.360790    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:35.145994    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:35.145994    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:35.153118    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:58:35.154620    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:58:35.154778    7280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0401 12:58:35.333404    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0401 12:58:35.333404    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:37.589890    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:37.590556    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:37.590556    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:40.311487    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:40.311487    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:40.317689    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:58:40.318491    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:58:40.318491    7280 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0401 12:58:42.859975    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0401 12:58:42.859975    7280 machine.go:97] duration metric: took 48.9619205s to provisionDockerMachine
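
The shell one-liner above is an idempotent unit update: write docker.service.new, diff it against the installed unit, and only when they differ (or, as here, when no unit exists yet and diff fails) move the new file into place, reload systemd, and restart the service. The same compare-then-replace pattern in Go, with illustrative paths:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged replaces dst with src only when their contents
    // differ or dst does not exist, mirroring the diff || mv one-liner.
    func installIfChanged(src, dst string) (changed bool, err error) {
        newer, err := os.ReadFile(src)
        if err != nil {
            return false, err
        }
        current, err := os.ReadFile(dst)
        if err == nil && bytes.Equal(current, newer) {
            return false, nil // already up to date; skip the restart
        }
        return true, os.Rename(src, dst)
    }

    func main() {
        changed, err := installIfChanged("docker.service.new", "docker.service")
        fmt.Println(changed, err)
    }
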
	I0401 12:58:42.859975    7280 start.go:293] postStartSetup for "test-preload-171600" (driver="hyperv")
	I0401 12:58:42.859975    7280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 12:58:42.874638    7280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 12:58:42.874638    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:45.138099    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:45.138099    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:45.139026    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:47.821416    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:47.821628    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:47.821737    7280 sshutil.go:53] new ssh client: &{IP:172.19.158.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-171600\id_rsa Username:docker}
	I0401 12:58:47.927019    7280 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0523453s)
	I0401 12:58:47.938638    7280 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 12:58:47.945709    7280 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 12:58:47.945838    7280 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0401 12:58:47.946380    7280 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0401 12:58:47.947963    7280 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem -> 12602.pem in /etc/ssl/certs
	I0401 12:58:47.962550    7280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 12:58:47.981866    7280 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\12602.pem --> /etc/ssl/certs/12602.pem (1708 bytes)
	I0401 12:58:48.030606    7280 start.go:296] duration metric: took 5.170594s for postStartSetup
	I0401 12:58:48.030786    7280 fix.go:56] duration metric: took 1m34.8724037s for fixHost
	I0401 12:58:48.030854    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:50.263664    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:50.263664    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:50.263664    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:53.003243    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:53.003325    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:53.009223    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:58:53.009867    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:58:53.009867    7280 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 12:58:53.157630    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711976333.148696789
	
	I0401 12:58:53.157719    7280 fix.go:216] guest clock: 1711976333.148696789
	I0401 12:58:53.157719    7280 fix.go:229] Guest: 2024-04-01 12:58:53.148696789 +0000 UTC Remote: 2024-04-01 12:58:48.0307862 +0000 UTC m=+107.683977601 (delta=5.117910589s)
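
The delta above is simply the guest timestamp minus the host timestamp: guest 12:58:53.148696789 less host 12:58:48.0307862 gives 5.117910589s, so the guest clock runs about five seconds ahead, which is why the next step resets it with date -s. The same comparison in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999"
        guest, _ := time.Parse(layout, "2024-04-01 12:58:53.148696789")
        host, _ := time.Parse(layout, "2024-04-01 12:58:48.0307862")
        // A positive delta means the guest clock is ahead of the host.
        fmt.Println("delta:", guest.Sub(host)) // delta: 5.117910589s
    }
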
	I0401 12:58:53.157870    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:58:55.416128    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:58:55.416953    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:55.417032    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:58:58.129755    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:58:58.130581    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:58:58.137129    7280 main.go:141] libmachine: Using SSH client type: native
	I0401 12:58:58.137451    7280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf69f80] 0xf6cb60 <nil>  [] 0s} 172.19.158.106 22 <nil> <nil>}
	I0401 12:58:58.137451    7280 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711976333
	I0401 12:58:58.277453    7280 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  1 12:58:53 UTC 2024
	
	I0401 12:58:58.277453    7280 fix.go:236] clock set: Mon Apr  1 12:58:53 UTC 2024
	 (err=<nil>)
	I0401 12:58:58.277453    7280 start.go:83] releasing machines lock for "test-preload-171600", held for 1m45.120225s
	I0401 12:58:58.277453    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:59:00.525687    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:59:00.525687    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:59:00.526189    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:59:03.217283    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:59:03.217283    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:59:03.221766    7280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 12:59:03.221891    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:59:03.234173    7280 ssh_runner.go:195] Run: cat /version.json
	I0401 12:59:03.234173    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-171600 ).state
	I0401 12:59:05.526765    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:59:05.526765    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:59:05.527161    7280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0401 12:59:05.527220    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:59:05.527352    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:59:05.527449    7280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-171600 ).networkadapters[0]).ipaddresses[0]
	I0401 12:59:08.322917    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:59:08.322917    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:59:08.322917    7280 sshutil.go:53] new ssh client: &{IP:172.19.158.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-171600\id_rsa Username:docker}
	I0401 12:59:08.354683    7280 main.go:141] libmachine: [stdout =====>] : 172.19.158.106
	
	I0401 12:59:08.354683    7280 main.go:141] libmachine: [stderr =====>] : 
	I0401 12:59:08.354683    7280 sshutil.go:53] new ssh client: &{IP:172.19.158.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-171600\id_rsa Username:docker}
	I0401 12:59:08.492335    7280 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2705318s)
	I0401 12:59:08.492335    7280 ssh_runner.go:235] Completed: cat /version.json: (5.2581247s)
	I0401 12:59:08.505931    7280 ssh_runner.go:195] Run: systemctl --version
	I0401 12:59:08.530229    7280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 12:59:08.539646    7280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 12:59:08.552300    7280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 12:59:08.583545    7280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 12:59:08.583545    7280 start.go:494] detecting cgroup driver to use...
	I0401 12:59:08.583822    7280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:59:08.631891    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0401 12:59:08.666177    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 12:59:08.687111    7280 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 12:59:08.699741    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 12:59:08.732938    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:59:08.765776    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 12:59:08.798831    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 12:59:08.830977    7280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 12:59:08.863349    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 12:59:08.897054    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 12:59:08.931124    7280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
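
The sed series above rewrites containerd's config.toml in place: pinning the pause image, forcing SystemdCgroup = false (the "cgroupfs" driver chosen earlier), migrating io.containerd.runtime.v1 and runc.v1 names to io.containerd.runc.v2, pointing conf_dir at /etc/cni/net.d, and re-enabling unprivileged ports. A sketch of the SystemdCgroup toggle done with Go's regexp instead of sed; the file path is taken from the log and the regex mirrors the sed expression:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        // Same substitution as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0644); err != nil {
            fmt.Println(err)
        }
    }
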
	I0401 12:59:08.968775    7280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 12:59:09.002613    7280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 12:59:09.040515    7280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:59:09.265619    7280 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 12:59:09.304179    7280 start.go:494] detecting cgroup driver to use...
	I0401 12:59:09.318620    7280 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0401 12:59:09.356709    7280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:59:09.399251    7280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 12:59:09.449670    7280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 12:59:09.487451    7280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:59:09.526589    7280 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 12:59:09.593012    7280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 12:59:09.619518    7280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 12:59:09.672457    7280 ssh_runner.go:195] Run: which cri-dockerd
	I0401 12:59:09.691404    7280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0401 12:59:09.711254    7280 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0401 12:59:09.756509    7280 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0401 12:59:09.984181    7280 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0401 12:59:10.207637    7280 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0401 12:59:10.207978    7280 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0401 12:59:10.260969    7280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 12:59:10.487137    7280 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0401 13:00:11.642621    7280 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1550503s)
	I0401 13:00:11.656029    7280 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0401 13:00:11.694997    7280 out.go:177] 
	W0401 13:00:11.697466    7280 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 12:58:40 test-preload-171600 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 12:58:40 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:40.983943594Z" level=info msg="Starting up"
	Apr 01 12:58:40 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:40.985875866Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 12:58:40 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:40.989919006Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.029403846Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.060776110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.060836209Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.060937608Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.060958207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.061791496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.061917094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.062253289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.062362788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.062388588Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.062405287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.063072678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.064790254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.069272692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.069426590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.069682586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.069823084Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.070463875Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.070599173Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.070621873Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.085291969Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.085501266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.085533366Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.085556065Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.085575665Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.085672464Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.086752849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.086957046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.087595237Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.087905233Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.087931732Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.087967032Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088153329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088364426Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088415326Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088553724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088577623Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088609623Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088632723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088678922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088694922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088711022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088725621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088838920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088943518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088967718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.088994618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089014017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089027817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089080516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089097916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089115716Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089140016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089161415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089179515Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089312413Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089338113Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089351913Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089363412Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089514510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089564910Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089586609Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089850306Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089911805Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089959504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 12:58:41 test-preload-171600 dockerd[664]: time="2024-04-01T12:58:41.089989304Z" level=info msg="containerd successfully booted in 0.063344s"
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.047945522Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.186505216Z" level=info msg="Loading containers: start."
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.662889989Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.760508247Z" level=info msg="Loading containers: done."
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.787616613Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.788329807Z" level=info msg="Daemon has completed initialization"
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.848795985Z" level=info msg="API listen on [::]:2376"
	Apr 01 12:58:42 test-preload-171600 dockerd[658]: time="2024-04-01T12:58:42.848960084Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 12:58:42 test-preload-171600 systemd[1]: Started Docker Application Container Engine.
	Apr 01 12:59:10 test-preload-171600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 12:59:10 test-preload-171600 dockerd[658]: time="2024-04-01T12:59:10.505024696Z" level=info msg="Processing signal 'terminated'"
	Apr 01 12:59:10 test-preload-171600 dockerd[658]: time="2024-04-01T12:59:10.508047409Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 12:59:10 test-preload-171600 dockerd[658]: time="2024-04-01T12:59:10.509441616Z" level=info msg="Daemon shutdown complete"
	Apr 01 12:59:10 test-preload-171600 dockerd[658]: time="2024-04-01T12:59:10.509718817Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 12:59:10 test-preload-171600 dockerd[658]: time="2024-04-01T12:59:10.509917318Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 12:59:11 test-preload-171600 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 12:59:11 test-preload-171600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 12:59:11 test-preload-171600 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 12:59:11 test-preload-171600 dockerd[1051]: time="2024-04-01T12:59:11.598687720Z" level=info msg="Starting up"
	Apr 01 13:00:11 test-preload-171600 dockerd[1051]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 13:00:11 test-preload-171600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 13:00:11 test-preload-171600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 13:00:11 test-preload-171600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0401 13:00:11.698076    7280 out.go:239] * 
	W0401 13:00:11.699632    7280 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 13:00:11.702391    7280 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:68: out/minikube-windows-amd64.exe start -p test-preload-171600 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv failed: exit status 90
panic.go:626: *** TestPreload FAILED at 2024-04-01 13:00:11.9217003 +0000 UTC m=+9573.327105001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-171600 -n test-preload-171600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-171600 -n test-preload-171600: exit status 6 (12.9657097s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 13:00:12.086989   13724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 13:00:24.816989   13724 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-171600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-171600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-171600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-171600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-171600: (1m1.4898073s)
--- FAIL: TestPreload (596.19s)

                                                
                                    
TestKubernetesUpgrade (10800.558s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-492300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-492300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m57.3468878s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-492300
E0401 13:18:23.495978    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-492300: (42.7024754s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-492300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-492300 status --format={{.Host}}: exit status 7 (2.684059s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 13:18:57.549688    3720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-492300 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=hyperv
E0401 13:19:46.781834    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestForceSystemdFlag (5m49s)
	TestKubernetesUpgrade (8m21s)
	TestRunningBinaryUpgrade (13m22s)
	TestStartStop (13m22s)
	TestStoppedBinaryUpgrade (8m21s)
	TestStoppedBinaryUpgrade/Upgrade (8m21s)

                                                
                                                
goroutine 1529 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0004d11e0, 0xc000847bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006925e8, {0x50051a0, 0x2a, 0x2a}, {0x2d663a0?, 0xc781af?, 0x5027980?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0009457c0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0009457c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 26 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000494800)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1558 [syscall, locked to thread]:
syscall.SyscallN(0xc000be3000?, {0xc000a63b20?, 0xbd7f45?, 0x50b4de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1?, 0xc000a63b80?, 0xbcfe76?, 0x50b4de0?, 0xc000a63c08?, 0xbc2a45?, 0x172b7800108?, 0x4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6a0, {0xc00241a33d?, 0x4c3, 0xc742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002462a08?, {0xc00241a33d?, 0xbc28db?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002462a08, {0xc00241a33d, 0x4c3, 0x4c3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009dc2b0, {0xc00241a33d?, 0xc0026a2798?, 0x23d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028593b0, {0x3cbacc0, 0xc0009dc308})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc0028593b0}, {0x3cbacc0, 0xc0009dc308}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x10?, {0x3cbae00, 0xc0028593b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc0028593b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc0028593b0}, {0x3cbad80, 0xc0009dc2b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0009a7600?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1540
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 568 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000963380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000963380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000963380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc000963380, 0x376fb68)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 52 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 43
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 1543 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc00081a2c0, 0xc0026ea0c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 567
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 1436 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00243c9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00243c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00243c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00243c9c0, 0xc0028381c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1433
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 564 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009624e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009624e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0009624e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0009624e0, 0x376fb30)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1454 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffd4cfa4de0?, {0xc000b6d798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x318, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0024f2990)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a742c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a742c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00229bba0, 0xc000a742c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00229bba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:243 +0xaff
testing.tRunner(0xc00229bba0, 0x376fbd8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 567 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffd4cfa4de0?, {0xc000841a80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x684, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0009d85d0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00081a2c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00081a2c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0009631e0, 0xc00081a2c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0009631e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:91 +0x347
testing.tRunner(0xc0009631e0, 0x376fb70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1549 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a742c0, 0xc0022fc120)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1454
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 155 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007eca50, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2825b20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a669c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007ecb00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000880af0, {0x3cbc100, 0xc000a736b0}, 0x1, 0xc000054720)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000880af0, 0x3b9aca00, 0x0, 0x1, 0xc000054720)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 127
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 157 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 156
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 156 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3cde980, 0xc000054720}, 0xc000a7bf50, 0xc000a7bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3cde980, 0xc000054720}, 0xc0?, 0xc000a7bf50, 0xc000a7bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3cde980?, 0xc000054720?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd4e685?, 0xc0008426e0?, 0xc00083a3c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 127
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1426 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00229a680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00229a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00229a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc00229a680, 0x376fc10)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1540 [syscall, 10 minutes, locked to thread]:
syscall.SyscallN(0x7ffd4cfa4de0?, {0xc00006b6a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x2e8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0029a1560)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0006ed600)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0006ed600)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000b22820, 0xc0006ed600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc00006bc20?, {0x3cc8238, 0xc00010a2c0}, 0x3770e00, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x3cc8238?, 0xc00010a2c0?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc000adfe28, 0x3b9aca00, 0x1a3185c5000, {0xc000adfd08?, 0x2825b20?, 0xc0f348?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc000b22820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc000b22820, 0xc002324140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1453
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 126 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a66ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 164
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 127 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007ecb00, 0xc000054720)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 164
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1548 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc78280?, {0xc002147b20?, 0xbd7f45?, 0x50b4de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x8?, 0xc002147b80?, 0xbcfe76?, 0x50b4de0?, 0xc002147c08?, 0xbc2a45?, 0x172b7800108?, 0xc0027d5f77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2d8, {0xc000a92242?, 0x1dbe, 0xc742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002300f08?, {0xc000a92242?, 0x21b1?, 0x21b1?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002300f08, {0xc000a92242, 0x1dbe, 0x1dbe})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b840b8, {0xc000a92242?, 0xc002147d98?, 0x1e4f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a88210, {0x3cbacc0, 0xc000704498})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc000a88210}, {0x3cbacc0, 0xc000704498}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3cbae00, 0xc000a88210})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc000a88210?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc000a88210}, {0x3cbad80, 0xc000b840b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026eb0e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1454
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 566 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000962820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000962820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc000962820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc000962820, 0x376fb40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1541 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc000b3fb10?, {0xc000b3fb20?, 0xbd7f45?, 0x50b4de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000c0001f8767?, 0xc000b3fb80?, 0xbcfe76?, 0x50b4de0?, 0xc000b3fc08?, 0xbc28db?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6c0, {0xc0025a1256?, 0x5aa, 0xc742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002802788?, {0xc0025a1256?, 0x0?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002802788, {0xc0025a1256, 0x5aa, 0x5aa})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b0c090, {0xc0025a1256?, 0x172b7a61228?, 0x217?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002266060, {0x3cbacc0, 0xc000141820})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc002266060}, {0x3cbacc0, 0xc000141820}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3cbae00, 0xc002266060})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc002266060?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc002266060}, {0x3cbad80, 0xc000b0c090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00083a060?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 567
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1450 [chan receive, 14 minutes]:
testing.(*T).Run(0xc00229b380, {0x2d0ba93?, 0xd07613?}, 0x376fe30)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00229b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00229b380, 0x376fc58)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1452 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffd4cfa4de0?, {0xc000b5b960?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x620, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0024f2840)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a74000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a74000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00229b860, 0xc000a74000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00229b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc00229b860, 0x376fc38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1453 [chan receive, 10 minutes]:
testing.(*T).Run(0xc00229ba00, {0x2d0fa33?, 0x3005753e800?}, 0xc002324140)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00229ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc00229ba00, 0x376fc60)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1547 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002903b20?, 0xbd7f45?, 0x50b4de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x890000c002903bd0?, 0xc002903b80?, 0xbcfe76?, 0x50b4de0?, 0xc002903c08?, 0xbc2a45?, 0x172b7800108?, 0x3cde64d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6d0, {0xc000825a15?, 0x5eb, 0xc000825800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002300788?, {0xc000825a15?, 0x0?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002300788, {0xc000825a15, 0x5eb, 0x5eb})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b84068, {0xc000825a15?, 0x172b7a61228?, 0x215?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a881b0, {0x3cbacc0, 0xc000141830})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc000a881b0}, {0x3cbacc0, 0xc000141830}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002903e78?, {0x3cbae00, 0xc000a881b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc000a881b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc000a881b0}, {0x3cbad80, 0xc000b84068}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026ea120?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1454
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1562 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0xc000acb000?, {0xc0028e5b20?, 0xbd7f45?, 0x50b4de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x77?, 0xc0028e5b80?, 0xbcfe76?, 0x50b4de0?, 0xc0028e5c08?, 0xbc2a45?, 0x172b7800108?, 0x77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x66c, {0xc000a9a218?, 0x1de8, 0xc742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002300a08?, {0xc000a9a218?, 0xbfc25e?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002300a08, {0xc000a9a218, 0x1de8, 0x1de8})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007043b0, {0xc000a9a218?, 0xc0028e5d98?, 0x1e40?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a880c0, {0x3cbacc0, 0xc000b0c128})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc000a880c0}, {0x3cbacc0, 0xc000b0c128}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3cbae00, 0xc000a880c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc000a880c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc000a880c0}, {0x3cbad80, 0xc0007043b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0029e7ec0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1452
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1439 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00243d1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00243d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00243d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00243d1e0, 0xc0028382c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1433
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 565 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000962680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000962680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc000962680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc000962680, 0x376fb28)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 651 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x172fd36b8a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xbcfe76?, 0x50b4de0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000b7c020, 0xc000ae9bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000b7c008, 0x1ec, {0xc000838000?, 0x0?, 0x0?}, 0xc0004a4008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000b7c008, 0xc000ae9d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000b7c008)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00287e080)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00287e080)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009300f0, {0x3cd24d0, 0xc00287e080})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009300f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00229a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 648
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 1561 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0xc000aebb10?, {0xc000aebb20?, 0xbd7f45?, 0x50b4de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4fd5480?, 0xc000aebb80?, 0xbcfe76?, 0x50b4de0?, 0xc000aebc08?, 0xbc2a45?, 0x172b7800108?, 0xc000aebd4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3cc, {0xc00241aa76?, 0x58a, 0xc742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002300508?, {0xc00241aa76?, 0xbfc211?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002300508, {0xc00241aa76, 0x58a, 0x58a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000704398, {0xc00241aa76?, 0xc000aebd98?, 0x213?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a88090, {0x3cbacc0, 0xc000b84028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc000a88090}, {0x3cbacc0, 0xc000b84028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3cbae00, 0xc000a88090})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc000a88090?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc000a88090}, {0x3cbad80, 0xc000704398}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022fc0c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1452
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1560 [select, 10 minutes]:
os/exec.(*Cmd).watchCtx(0xc0006ed600, 0xc0026ebda0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1540
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1435 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00243c820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00243c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00243c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00243c820, 0xc002838180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1433
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1433 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00243c340, 0x376fe30)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1450
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1542 [syscall, locked to thread]:
syscall.SyscallN(0xc00097bb10?, {0xc00097bb20?, 0xbd7f45?, 0x5034840?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x100000000002041?, 0xc00097bb80?, 0xbcfe76?, 0x50b4de0?, 0xc00097bc08?, 0xbc2a45?, 0xc00240f180?, 0x8000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6c4, {0xc000b61a5e?, 0x25a2, 0xc742bf?}, 0xc00097bc04?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002802f08?, {0xc000b61a5e?, 0x0?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002802f08, {0xc000b61a5e, 0x25a2, 0x25a2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b0c0f8, {0xc000b61a5e?, 0xc00097bd30?, 0x3e60?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002266150, {0x3cbacc0, 0xc0009dc108})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc002266150}, {0x3cbacc0, 0xc0009dc108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00097be78?, {0x3cbae00, 0xc002266150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc002266150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc002266150}, {0x3cbad80, 0xc000b0c0f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022fc240?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 567
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1559 [syscall, 10 minutes, locked to thread]:
syscall.SyscallN(0xc000aafb10?, {0xc000aafb20?, 0xbd7f45?, 0x50b4de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x100000000002077?, 0xc000aafb80?, 0xbcfe76?, 0x50b4de0?, 0xc000aafc08?, 0xbc2a45?, 0x172b7800eb8?, 0x35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6a8, {0xc00047f200?, 0x200, 0xc00047f200?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002462f08?, {0xc00047f200?, 0x0?, 0x200?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002462f08, {0xc00047f200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009dc2e8, {0xc00047f200?, 0x172fcec3168?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028593e0, {0x3cbacc0, 0xc0005b0000})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3cbae00, 0xc0028593e0}, {0x3cbacc0, 0xc0005b0000}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3cbae00, 0xc0028593e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fb9aa0?, {0x3cbae00?, 0xc0028593e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3cbae00, 0xc0028593e0}, {0x3cbad80, 0xc0009dc2e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022fc0c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1540
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1563 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a74000, 0xc0022fc060)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1452
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1438 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00243d040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00243d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00243d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00243d040, 0xc002838240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1433
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1434 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00243c4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00243c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00243c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00243c4e0, 0xc002838140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1433
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1437 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0009cb0e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00243cb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00243cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00243cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00243cb60, 0xc002838200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1433
	/usr/local/go/src/testing/testing.go:1742 +0x390

TestPause/serial/Start (390.55s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-128300 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-128300 --memory=2048 --install-addons=false --wait=all --driver=hyperv: exit status 90 (6m17.4540476s)

-- stdout --
	* [pause-128300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "pause-128300" primary control-plane node in "pause-128300" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0401 13:07:16.578753   12236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 01 13:11:55 pause-128300 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 13:11:55 pause-128300 dockerd[663]: time="2024-04-01T13:11:55.322044589Z" level=info msg="Starting up"
	Apr 01 13:11:55 pause-128300 dockerd[663]: time="2024-04-01T13:11:55.323734166Z" level=info msg="containerd not running, starting managed containerd"
	Apr 01 13:11:55 pause-128300 dockerd[663]: time="2024-04-01T13:11:55.327773412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.370894331Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.404100084Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.404315281Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.404413380Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.404521978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.404764575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.404896073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.405431866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.405469465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.405488365Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.405501265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.405786161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.406226055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.411918878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.412053877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.412441271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.412487471Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.412624469Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.412789067Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.412810466Z" level=info msg="metadata content store policy set" policy=shared
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.442247070Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.442442067Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.442486267Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.442507166Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.442525466Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.442722364Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443375155Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443677251Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443781149Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443804349Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443820949Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443847648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443862948Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443881048Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443901948Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443918747Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443933747Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.443948247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444024146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444056346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444080645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444100845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444115745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444132245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444147244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444162344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444219443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444250843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444273543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444293342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444308342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444327742Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444357742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444373341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444394241Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444535239Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444582239Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444599138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444611038Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444746636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444915134Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.444943534Z" level=info msg="NRI interface is disabled by configuration."
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.445446427Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.445606525Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.446243916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 01 13:11:55 pause-128300 dockerd[669]: time="2024-04-01T13:11:55.446327115Z" level=info msg="containerd successfully booted in 0.077173s"
	Apr 01 13:11:56 pause-128300 dockerd[663]: time="2024-04-01T13:11:56.417870348Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 01 13:11:56 pause-128300 dockerd[663]: time="2024-04-01T13:11:56.448027367Z" level=info msg="Loading containers: start."
	Apr 01 13:11:56 pause-128300 dockerd[663]: time="2024-04-01T13:11:56.767063450Z" level=info msg="Loading containers: done."
	Apr 01 13:11:56 pause-128300 dockerd[663]: time="2024-04-01T13:11:56.795676086Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 01 13:11:56 pause-128300 dockerd[663]: time="2024-04-01T13:11:56.796086984Z" level=info msg="Daemon has completed initialization"
	Apr 01 13:11:56 pause-128300 dockerd[663]: time="2024-04-01T13:11:56.918734980Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 01 13:11:56 pause-128300 dockerd[663]: time="2024-04-01T13:11:56.919143977Z" level=info msg="API listen on [::]:2376"
	Apr 01 13:11:56 pause-128300 systemd[1]: Started Docker Application Container Engine.
	Apr 01 13:12:32 pause-128300 dockerd[663]: time="2024-04-01T13:12:32.660570152Z" level=info msg="Processing signal 'terminated'"
	Apr 01 13:12:32 pause-128300 systemd[1]: Stopping Docker Application Container Engine...
	Apr 01 13:12:32 pause-128300 dockerd[663]: time="2024-04-01T13:12:32.663085269Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 01 13:12:32 pause-128300 dockerd[663]: time="2024-04-01T13:12:32.664070176Z" level=info msg="Daemon shutdown complete"
	Apr 01 13:12:32 pause-128300 dockerd[663]: time="2024-04-01T13:12:32.664235877Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 01 13:12:32 pause-128300 dockerd[663]: time="2024-04-01T13:12:32.664270577Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 01 13:12:33 pause-128300 systemd[1]: docker.service: Deactivated successfully.
	Apr 01 13:12:33 pause-128300 systemd[1]: Stopped Docker Application Container Engine.
	Apr 01 13:12:33 pause-128300 systemd[1]: Starting Docker Application Container Engine...
	Apr 01 13:12:33 pause-128300 dockerd[1020]: time="2024-04-01T13:12:33.752932374Z" level=info msg="Starting up"
	Apr 01 13:13:33 pause-128300 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 01 13:13:33 pause-128300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 01 13:13:33 pause-128300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 01 13:13:33 pause-128300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-128300 --memory=2048 --install-addons=false --wait=all --driver=hyperv" : exit status 90
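The failure above is RUNTIME_ENABLE: on the second start of docker.service, dockerd times out dialing /run/containerd/containerd.sock. A minimal triage sketch, assuming the pause-128300 VM is still reachable over SSH; the systemctl and journalctl commands are the ones the error output itself suggests, run here through minikube ssh:

	# Inspect the Docker unit and its journal inside the guest
	out/minikube-windows-amd64.exe ssh -p pause-128300 "sudo systemctl status docker.service"
	out/minikube-windows-amd64.exe ssh -p pause-128300 "sudo journalctl -xeu docker.service --no-pager"
	# Collect the full log bundle for a GitHub issue, as the advice box below recommends
	out/minikube-windows-amd64.exe logs -p pause-128300 --file=logs.txt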
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-128300 -n pause-128300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-128300 -n pause-128300: exit status 6 (13.0899366s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0401 13:13:34.031287   13568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0401 13:13:46.910793   13568 status.go:417] kubeconfig endpoint: get endpoint: "pause-128300" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "pause-128300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestPause/serial/Start (390.55s)
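Note that the status output above also warns that kubectl is pointing at a stale minikube-vm context. The one-line fix is the command quoted in the warning itself (profile name taken from this run):

	# Rewrite the kubeconfig entry so kubectl targets the current VM endpoint
	out/minikube-windows-amd64.exe update-context -p pause-128300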

TestNoKubernetes/serial/StartWithK8s (299.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-035600 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-035600 --driver=hyperv: exit status 1 (4m59.6184528s)

-- stdout --
	* [NoKubernetes-035600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-035600" primary control-plane node in "NoKubernetes-035600" cluster

-- /stdout --
** stderr ** 
	W0401 13:07:17.019939    5816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-035600 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-035600 -n NoKubernetes-035600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-035600 -n NoKubernetes-035600: exit status 7 (266.9858ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0401 13:12:16.593745    7128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-035600" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.89s)
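Because the host state is "Nonexistent", the Hyper-V VM was never created. A retry sketch that first clears any half-created profile state; both commands are used elsewhere in this report, and the profile name is taken from this run:

	# Remove leftover profile state, then repeat the failed start invocation
	out/minikube-windows-amd64.exe delete -p NoKubernetes-035600
	out/minikube-windows-amd64.exe start -p NoKubernetes-035600 --driver=hyperv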


Test pass (92/146)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 19.03
4 TestDownloadOnly/v1.20.0/preload-exists 0.07
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 1.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.38
12 TestDownloadOnly/v1.29.3/json-events 11.74
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.3
18 TestDownloadOnly/v1.29.3/DeleteAll 1.42
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 1.38
21 TestDownloadOnly/v1.30.0-beta.0/json-events 13.8
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.44
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 1.38
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 1.4
30 TestBinaryMirror 7.66
31 TestOffline 300.94
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
36 TestAddons/Setup 398.82
39 TestAddons/parallel/Ingress 66.14
40 TestAddons/parallel/InspektorGadget 26.23
41 TestAddons/parallel/MetricsServer 22.59
42 TestAddons/parallel/HelmTiller 37.24
44 TestAddons/parallel/CSI 73.52
45 TestAddons/parallel/Headlamp 41.8
46 TestAddons/parallel/CloudSpanner 22.61
47 TestAddons/parallel/LocalPath 32.75
48 TestAddons/parallel/NvidiaDevicePlugin 20.61
49 TestAddons/parallel/Yakd 5.02
52 TestAddons/serial/GCPAuth/Namespaces 0.36
53 TestAddons/StoppedEnableDisable 56.19
65 TestErrorSpam/start 18.32
66 TestErrorSpam/status 38.31
67 TestErrorSpam/pause 24.06
68 TestErrorSpam/unpause 24.1
69 TestErrorSpam/stop 63.67
72 TestFunctional/serial/CopySyncFile 0.04
73 TestFunctional/serial/StartWithProxy 218
74 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/KubeContext 0.15
80 TestFunctional/serial/CacheCmd/cache/add_remote 348.25
81 TestFunctional/serial/CacheCmd/cache/add_local 60.81
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
83 TestFunctional/serial/CacheCmd/cache/list 0.3
86 TestFunctional/serial/CacheCmd/cache/delete 0.55
93 TestFunctional/delete_addon-resizer_images 0.02
94 TestFunctional/delete_my-image_image 0.01
95 TestFunctional/delete_minikube_cached_images 0.01
99 TestMultiControlPlane/serial/StartCluster 855.58
100 TestMultiControlPlane/serial/DeployApp 13.04
102 TestMultiControlPlane/serial/AddWorkerNode 268.21
103 TestMultiControlPlane/serial/NodeLabels 0.22
104 TestMultiControlPlane/serial/HAppyAfterClusterStart 30.48
108 TestImageBuild/serial/Setup 211.4
109 TestImageBuild/serial/NormalBuild 10.08
110 TestImageBuild/serial/BuildWithBuildArg 9.45
111 TestImageBuild/serial/BuildWithDockerIgnore 8.24
112 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.15
116 TestJSONOutput/start/Command 222.19
117 TestJSONOutput/start/Audit 0
119 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
120 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
122 TestJSONOutput/pause/Command 8.4
123 TestJSONOutput/pause/Audit 0
125 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
126 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
128 TestJSONOutput/unpause/Command 8.35
129 TestJSONOutput/unpause/Audit 0
131 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
132 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
134 TestJSONOutput/stop/Command 35.56
135 TestJSONOutput/stop/Audit 0
137 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
138 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
139 TestErrorJSONOutput 1.64
144 TestMainNoArgs 0.26
145 TestMinikubeProfile 570.43
148 TestMountStart/serial/StartWithMountFirst 164.25
149 TestMountStart/serial/VerifyMountFirst 10.09
150 TestMountStart/serial/StartWithMountSecond 165.48
151 TestMountStart/serial/VerifyMountSecond 10.13
152 TestMountStart/serial/DeleteFirst 29.07
153 TestMountStart/serial/VerifyMountPostDelete 10.16
154 TestMountStart/serial/Stop 31.89
155 TestMountStart/serial/RestartStopped 125.02
156 TestMountStart/serial/VerifyMountPostStop 9.89
177 TestScheduledStopWindows 350.05
187 TestNoKubernetes/serial/StartNoK8sWithVersion 0.41
TestDownloadOnly/v1.20.0/json-events (19.03s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-452300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-452300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (19.0268463s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.03s)

TestDownloadOnly/v1.20.0/preload-exists (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.07s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-452300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-452300: exit status 85 (296.8627ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC |          |
	|         | -p download-only-452300        |                      |                   |                |                     |          |
	|         | --force --alsologtostderr      |                      |                   |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |          |
	|         | --container-runtime=docker     |                      |                   |                |                     |          |
	|         | --driver=hyperv                |                      |                   |                |                     |          |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:20:38
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:20:38.806244   12084 out.go:291] Setting OutFile to fd 576 ...
	I0401 10:20:38.807244   12084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:20:38.807244   12084 out.go:304] Setting ErrFile to fd 616...
	I0401 10:20:38.807244   12084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 10:20:38.820254   12084 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0401 10:20:38.831246   12084 out.go:298] Setting JSON to true
	I0401 10:20:38.835246   12084 start.go:129] hostinfo: {"hostname":"minikube6","uptime":309597,"bootTime":1711657241,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:20:38.835246   12084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:20:38.842262   12084 out.go:97] [download-only-452300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:20:38.842262   12084 notify.go:220] Checking for updates...
	I0401 10:20:38.845247   12084 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	W0401 10:20:38.842262   12084 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0401 10:20:38.848257   12084 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:20:38.850249   12084 out.go:169] MINIKUBE_LOCATION=18551
	I0401 10:20:38.853256   12084 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0401 10:20:38.859246   12084 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 10:20:38.860244   12084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:20:44.618339   12084 out.go:97] Using the hyperv driver based on user configuration
	I0401 10:20:44.618339   12084 start.go:297] selected driver: hyperv
	I0401 10:20:44.618339   12084 start.go:901] validating driver "hyperv" against <nil>
	I0401 10:20:44.619035   12084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:20:44.687465   12084 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0401 10:20:44.688951   12084 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 10:20:44.688951   12084 cni.go:84] Creating CNI manager for ""
	I0401 10:20:44.688951   12084 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0401 10:20:44.689649   12084 start.go:340] cluster config:
	{Name:download-only-452300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-452300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:20:44.691568   12084 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:20:44.694898   12084 out.go:97] Downloading VM boot image ...
	I0401 10:20:44.694898   12084 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 10:20:48.655323   12084 out.go:97] Starting "download-only-452300" primary control-plane node in "download-only-452300" cluster
	I0401 10:20:48.655323   12084 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0401 10:20:48.694774   12084 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0401 10:20:48.694774   12084 cache.go:56] Caching tarball of preloaded images
	I0401 10:20:48.695560   12084 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0401 10:20:48.701615   12084 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0401 10:20:48.701615   12084 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0401 10:20:48.764222   12084 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0401 10:20:52.813426   12084 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0401 10:20:52.814674   12084 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-452300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-452300"

-- /stdout --
** stderr ** 
	W0401 10:20:57.842751   11988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
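The Last Start log above fetches the preload tarball with an ?checksum=md5:... query and then verifies the saved file. A hypothetical PowerShell sketch of the same check done by hand; the cache path and MD5 value are copied from the download line above, and Get-FileHash is a standard PowerShell cmdlet:

	# Recompute the MD5 of the cached preload tarball and compare it to the checksum from the URL
	$tarball = "C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
	(Get-FileHash -Algorithm MD5 -Path $tarball).Hash -eq "9a82241e9b8b4ad2b5cca73108f2c7a3".ToUpper()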

TestDownloadOnly/v1.20.0/DeleteAll (1.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3389016s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.34s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.38s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-452300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-452300: (1.3761249s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.38s)

TestDownloadOnly/v1.29.3/json-events (11.74s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-134000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-134000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv: (11.7377072s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (11.74s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-134000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-134000: exit status 85 (295.3322ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC |                     |
	|         | -p download-only-452300        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC | 01 Apr 24 10:20 UTC |
	| delete  | -p download-only-452300        | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC | 01 Apr 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-134000 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | -p download-only-134000        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:21:00
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:21:00.932589    4268 out.go:291] Setting OutFile to fd 748 ...
	I0401 10:21:00.932589    4268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:21:00.932589    4268 out.go:304] Setting ErrFile to fd 772...
	I0401 10:21:00.932589    4268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:21:00.957641    4268 out.go:298] Setting JSON to true
	I0401 10:21:00.963514    4268 start.go:129] hostinfo: {"hostname":"minikube6","uptime":309619,"bootTime":1711657241,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:21:00.963514    4268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:21:01.060076    4268 out.go:97] [download-only-134000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:21:01.060712    4268 notify.go:220] Checking for updates...
	I0401 10:21:01.063550    4268 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:21:01.066149    4268 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:21:01.068529    4268 out.go:169] MINIKUBE_LOCATION=18551
	I0401 10:21:01.071383    4268 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0401 10:21:01.076414    4268 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 10:21:01.077590    4268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:21:06.790848    4268 out.go:97] Using the hyperv driver based on user configuration
	I0401 10:21:06.790848    4268 start.go:297] selected driver: hyperv
	I0401 10:21:06.790848    4268 start.go:901] validating driver "hyperv" against <nil>
	I0401 10:21:06.790848    4268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:21:06.844829    4268 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0401 10:21:06.845529    4268 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 10:21:06.846108    4268 cni.go:84] Creating CNI manager for ""
	I0401 10:21:06.846108    4268 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:21:06.846234    4268 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 10:21:06.846345    4268 start.go:340] cluster config:
	{Name:download-only-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:21:06.846345    4268 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:21:06.851809    4268 out.go:97] Starting "download-only-134000" primary control-plane node in "download-only-134000" cluster
	I0401 10:21:06.852331    4268 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:21:06.899937    4268 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:21:06.899937    4268 cache.go:56] Caching tarball of preloaded images
	I0401 10:21:06.900043    4268 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0401 10:21:06.903401    4268 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0401 10:21:06.903564    4268 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0401 10:21:06.974675    4268 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0401 10:21:10.234517    4268 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0401 10:21:10.235652    4268 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-134000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-134000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:21:12.597655    4384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.30s)
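
Note the shape of this subtest: "minikube logs" against a download-only profile exits with status 85 because no control-plane host exists, and the test treats that non-zero exit as the expected outcome. A small sketch, standard library only, of how such an exit code can be read back from os/exec; the profile name is taken from the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // "logs" against a download-only profile is expected to fail.
        cmd := exec.Command("out/minikube-windows-amd64.exe", "logs", "-p", "download-only-134000")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // The harness above asserts this is 85 ("exit status 85").
            fmt.Println("exit code:", ee.ExitCode())
        }
    }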

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (1.42s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4163963s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (1.42s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.38s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-134000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-134000: (1.3766183s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.38s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/json-events (13.8s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-373700 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-373700 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=hyperv: (13.7995477s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (13.80s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.44s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-373700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-373700: exit status 85 (438.3832ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC |                     |
	|         | -p download-only-452300             |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |                |                     |                     |
	|         | --container-runtime=docker          |                      |                   |                |                     |                     |
	|         | --driver=hyperv                     |                      |                   |                |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC | 01 Apr 24 10:20 UTC |
	| delete  | -p download-only-452300             | download-only-452300 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:20 UTC | 01 Apr 24 10:21 UTC |
	| start   | -o=json --download-only             | download-only-134000 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | -p download-only-134000             |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |                   |                |                     |                     |
	|         | --container-runtime=docker          |                      |                   |                |                     |                     |
	|         | --driver=hyperv                     |                      |                   |                |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| delete  | -p download-only-134000             | download-only-134000 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC | 01 Apr 24 10:21 UTC |
	| start   | -o=json --download-only             | download-only-373700 | minikube6\jenkins | v1.33.0-beta.0 | 01 Apr 24 10:21 UTC |                     |
	|         | -p download-only-373700             |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |                   |                |                     |                     |
	|         | --container-runtime=docker          |                      |                   |                |                     |                     |
	|         | --driver=hyperv                     |                      |                   |                |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:21:15
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:21:15.761035    7100 out.go:291] Setting OutFile to fd 576 ...
	I0401 10:21:15.762337    7100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:21:15.762474    7100 out.go:304] Setting ErrFile to fd 580...
	I0401 10:21:15.762474    7100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:21:15.785497    7100 out.go:298] Setting JSON to true
	I0401 10:21:15.788933    7100 start.go:129] hostinfo: {"hostname":"minikube6","uptime":309634,"bootTime":1711657241,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0401 10:21:15.789572    7100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0401 10:21:15.795097    7100 out.go:97] [download-only-373700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0401 10:21:15.795331    7100 notify.go:220] Checking for updates...
	I0401 10:21:15.800036    7100 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0401 10:21:15.802902    7100 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0401 10:21:15.807661    7100 out.go:169] MINIKUBE_LOCATION=18551
	I0401 10:21:15.809830    7100 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0401 10:21:15.815503    7100 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 10:21:15.816378    7100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:21:21.393525    7100 out.go:97] Using the hyperv driver based on user configuration
	I0401 10:21:21.393730    7100 start.go:297] selected driver: hyperv
	I0401 10:21:21.393823    7100 start.go:901] validating driver "hyperv" against <nil>
	I0401 10:21:21.394181    7100 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:21:21.446236    7100 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0401 10:21:21.446960    7100 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 10:21:21.447490    7100 cni.go:84] Creating CNI manager for ""
	I0401 10:21:21.447733    7100 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0401 10:21:21.447814    7100 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 10:21:21.447814    7100 start.go:340] cluster config:
	{Name:download-only-373700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-373700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:21:21.447814    7100 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:21:21.475821    7100 out.go:97] Starting "download-only-373700" primary control-plane node in "download-only-373700" cluster
	I0401 10:21:21.475821    7100 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0401 10:21:21.519996    7100 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0401 10:21:21.519996    7100 cache.go:56] Caching tarball of preloaded images
	I0401 10:21:21.520421    7100 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0401 10:21:21.525438    7100 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0401 10:21:21.525438    7100 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0401 10:21:21.586502    7100 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d024b8f2a881a92d6d422e5948616edf -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0401 10:21:24.684954    7100 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0401 10:21:24.684954    7100 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0401 10:21:25.677286    7100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0401 10:21:25.678441    7100 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-373700\config.json ...
	I0401 10:21:25.679408    7100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-373700\config.json: {Name:mk55f4628940b36c3513a5eac265182cf5d10527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:21:25.680824    7100 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0401 10:21:25.681026    7100 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.30.0-beta.0/kubectl.exe
	
	
	* The control-plane node download-only-373700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-373700"

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:21:29.491698    8936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.44s)
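
The preload step above downloads the tarball with a "?checksum=md5:..." query and then runs a separate "verifying checksum" pass over the saved file. A minimal sketch of that verification, assuming a local file path; the expected digest is copied from the v1.30.0-beta.0 download URL in the log:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        // Digest from the ?checksum=md5:... query shown above; the path is
        // illustrative (the real cache lives under .minikube\cache).
        const want = "d024b8f2a881a92d6d422e5948616edf"
        f, err := os.Open("preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }
        fmt.Println("checksum ok:", hex.EncodeToString(h.Sum(nil)) == want)
    }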

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (1.38s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3806653s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (1.38s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (1.4s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-373700
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-373700: (1.3985775s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (1.40s)

                                                
                                    
TestBinaryMirror (7.66s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-729600 --alsologtostderr --binary-mirror http://127.0.0.1:49987 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-729600 --alsologtostderr --binary-mirror http://127.0.0.1:49987 --driver=hyperv: (6.6987321s)
helpers_test.go:175: Cleaning up "binary-mirror-729600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-729600
--- PASS: TestBinaryMirror (7.66s)
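
TestBinaryMirror points --binary-mirror at a local HTTP endpoint so the Kubernetes binaries are fetched from it instead of dl.k8s.io. A rough stand-in for such a mirror, assuming the binaries were pre-fetched into a local ./mirror directory; the exact directory layout the flag expects is not shown in this log, so treat this as a sketch only:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a directory of pre-fetched binaries over HTTP. The test
        // above used 127.0.0.1:49987 as its mirror address.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:49987", nil))
    }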

                                                
                                    
TestOffline (300.94s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-035600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-035600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m18.8220047s)
helpers_test.go:175: Cleaning up "offline-docker-035600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-035600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-035600: (42.1132932s)
--- PASS: TestOffline (300.94s)
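
TestOffline starts a full cluster and then always deletes the profile, as the "Cleaning up ... profile" helper lines show. A sketch of that start-then-cleanup shape with a deferred delete; the flags and profile name are copied from the run above, the structure is illustrative rather than the harness's actual helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const profile = "offline-docker-035600" // profile name from the run above
        // Delete the profile even if start fails, mirroring the cleanup helper.
        defer exec.Command("out/minikube-windows-amd64.exe", "delete", "-p", profile).Run()
        out, err := exec.Command("out/minikube-windows-amd64.exe",
            "start", "-p", profile, "--memory=2048", "--wait=true", "--driver=hyperv").CombinedOutput()
        if err != nil {
            fmt.Printf("start failed: %v\n%s", err, out)
        }
    }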

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800: exit status 85 (275.4708ms)

                                                
                                                
-- stdout --
	* Profile "addons-852800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-852800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:21:44.419655    2168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800: exit status 85 (291.4544ms)

                                                
                                                
-- stdout --
	* Profile "addons-852800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-852800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0401 10:21:44.423652    8152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

                                                
                                    
TestAddons/Setup (398.82s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-852800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-852800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m38.8162915s)
--- PASS: TestAddons/Setup (398.82s)
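
Setup enables thirteen addons in a single start invocation. When scripting a comparable start, the flag list is easier to maintain as a slice; a sketch that assembles the same arguments, with addon names copied from the command above and everything else illustrative:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Addon names copied from the start invocation above.
        addons := []string{
            "registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
            "gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
            "nvidia-device-plugin", "yakd", "ingress", "ingress-dns", "helm-tiller",
        }
        args := []string{"start", "-p", "addons-852800", "--wait=true", "--memory=4000", "--driver=hyperv"}
        for _, a := range addons {
            args = append(args, "--addons="+a)
        }
        fmt.Println(strings.Join(args, " "))
    }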

                                                
                                    
TestAddons/parallel/Ingress (66.14s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-852800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-852800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-852800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [28e2aa63-4952-4b02-bc14-96ea52fec8cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [28e2aa63-4952-4b02-bc14-96ea52fec8cf] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0143427s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.1111843s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-852800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0401 10:29:56.180242   10636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-852800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ip: (2.6434439s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.19.148.231
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress-dns --alsologtostderr -v=1: (16.0255588s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress --alsologtostderr -v=1: (22.387981s)
--- PASS: TestAddons/parallel/Ingress (66.14s)
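
The ingress check curls the cluster from inside the VM with an explicit Host header so the request matches the nginx ingress rule. The equivalent probe expressed with net/http, for reference; the 127.0.0.1 address only resolves where the test ran it, inside the guest:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Equivalent of: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
        req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        req.Host = "nginx.example.com" // routes the request to the ingress rule
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }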

                                                
                                    
TestAddons/parallel/InspektorGadget (26.23s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7w8sh" [55fea15c-adde-4820-8c06-4514043d2687] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0194084s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-852800
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-852800: (21.2053765s)
--- PASS: TestAddons/parallel/InspektorGadget (26.23s)

                                                
                                    
TestAddons/parallel/MetricsServer (22.59s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 24.9868ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-9j6db" [c4b50a6b-bd8a-4386-97e6-323009bdc6f8] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0349804s
addons_test.go:415: (dbg) Run:  kubectl --context addons-852800 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable metrics-server --alsologtostderr -v=1: (17.3303771s)
--- PASS: TestAddons/parallel/MetricsServer (22.59s)
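
The "waiting ... for pods matching" helpers poll pod phase for a label selector until every matching pod reports Running. A sketch of the same probe using kubectl and jsonpath; the context and selector are copied from the metrics-server check above, while the intervals are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("kubectl", "--context", "addons-852800",
                "-n", "kube-system", "get", "pods", "-l", "k8s-app=metrics-server",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            phases := strings.Fields(string(out))
            if len(phases) > 0 {
                running := true
                for _, p := range phases {
                    if p != "Running" {
                        running = false
                    }
                }
                if running {
                    fmt.Println("healthy")
                    return
                }
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for k8s-app=metrics-server")
    }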

                                                
                                    
TestAddons/parallel/HelmTiller (37.24s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 8.0677ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-m88ns" [5136ac7c-84c6-4cd2-9d90-2d625f86676a] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0223769s
addons_test.go:473: (dbg) Run:  kubectl --context addons-852800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-852800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.8136297s)
addons_test.go:478: kubectl --context addons-852800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-852800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-852800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.4750362s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable helm-tiller --alsologtostderr -v=1: (16.5373626s)
--- PASS: TestAddons/parallel/HelmTiller (37.24s)

                                                
                                    
TestAddons/parallel/CSI (73.52s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 27.3687ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3eaf5809-594b-4512-a8b3-3aa700b2d1c2] Pending
helpers_test.go:344: "task-pv-pod" [3eaf5809-594b-4512-a8b3-3aa700b2d1c2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3eaf5809-594b-4512-a8b3-3aa700b2d1c2] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.0178746s
addons_test.go:584: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-852800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-852800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-852800 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-852800 delete pod task-pv-pod: (1.3432329s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-852800 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [59882673-2090-4fd9-b3a5-3bab333ffd6f] Pending
helpers_test.go:344: "task-pv-pod-restore" [59882673-2090-4fd9-b3a5-3bab333ffd6f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [59882673-2090-4fd9-b3a5-3bab333ffd6f] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0155842s
addons_test.go:626: (dbg) Run:  kubectl --context addons-852800 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-852800 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-852800 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (24.3534794s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable volumesnapshots --alsologtostderr -v=1: (17.3564369s)
--- PASS: TestAddons/parallel/CSI (73.52s)
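
The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are a poll loop waiting for the claim to bind. The same probe wrapped in a small helper; the names and timeout are copied from the test, the helper itself is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase repeats the jsonpath probe from the log until the claim
    // reports the wanted phase or the deadline passes.
    func waitForPVCPhase(kubectx, name, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
                "-o", "jsonpath={.status.phase}", "-n", "default").Output()
            if strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s never reached phase %s", name, want)
    }

    func main() {
        fmt.Println(waitForPVCPhase("addons-852800", "hpvc", "Bound", 6*time.Minute))
    }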

                                                
                                    
TestAddons/parallel/Headlamp (41.8s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-852800 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-852800 --alsologtostderr -v=1: (18.7799152s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-9jr89" [89766b69-fb47-4a44-b5cc-ddb4694a8d4e] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-9jr89" [89766b69-fb47-4a44-b5cc-ddb4694a8d4e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-9jr89" [89766b69-fb47-4a44-b5cc-ddb4694a8d4e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.0128453s
--- PASS: TestAddons/parallel/Headlamp (41.80s)

                                                
                                    
TestAddons/parallel/CloudSpanner (22.61s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-hm2f8" [a606068f-a01d-4faf-9bac-271e5b52e75e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0094495s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-852800
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-852800: (17.584836s)
--- PASS: TestAddons/parallel/CloudSpanner (22.61s)

                                                
                                    
TestAddons/parallel/LocalPath (32.75s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-852800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-852800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [38e52810-ea30-45d4-8df1-9bfa50d9b3fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [38e52810-ea30-45d4-8df1-9bfa50d9b3fd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [38e52810-ea30-45d4-8df1-9bfa50d9b3fd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0204345s
addons_test.go:891: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ssh "cat /opt/local-path-provisioner/pvc-772810e3-66c1-4b28-81a8-0348debb99f1_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ssh "cat /opt/local-path-provisioner/pvc-772810e3-66c1-4b28-81a8-0348debb99f1_default_test-pvc/file1": (10.7380054s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-852800 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-852800 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.1920112s)
--- PASS: TestAddons/parallel/LocalPath (32.75s)
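
The local-path check verifies the provisioned volume by cat-ing the written file through "minikube ssh". The same probe expressed in Go; the pvc-772810e3... path is copied verbatim from this run and is run-specific, so it is only meaningful against this exact cluster state:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Read the file back through the guest, as the test above does.
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-852800",
            "ssh", "cat /opt/local-path-provisioner/pvc-772810e3-66c1-4b28-81a8-0348debb99f1_default_test-pvc/file1").Output()
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        fmt.Println("file1 contents:", strings.TrimSpace(string(out)))
    }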

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (20.61s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lxk7c" [363a7316-182c-4a25-86ec-89e74ef033c5] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0331654s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-852800
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-852800: (15.5664233s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.61s)

                                                
                                    
TestAddons/parallel/Yakd (5.02s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-xptlz" [7bc46406-63c9-459b-85a1-d8a0a6edbae9] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0130828s
--- PASS: TestAddons/parallel/Yakd (5.02s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.36s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-852800 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-852800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

                                                
                                    
TestAddons/StoppedEnableDisable (56.19s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-852800
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-852800: (42.9257185s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800: (5.2641242s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800: (5.101717s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-852800
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-852800: (2.9001649s)
--- PASS: TestAddons/StoppedEnableDisable (56.19s)

TestErrorSpam/start (18.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 start --dry-run: (6.0785747s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 start --dry-run: (6.160834s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 start --dry-run: (6.0713335s)
--- PASS: TestErrorSpam/start (18.32s)

TestErrorSpam/status (38.31s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 status: (13.0986322s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 status: (12.5867082s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 status: (12.6266104s)
--- PASS: TestErrorSpam/status (38.31s)

TestErrorSpam/pause (24.06s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 pause: (8.1698242s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 pause: (7.9997699s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 pause: (7.8833539s)
--- PASS: TestErrorSpam/pause (24.06s)

TestErrorSpam/unpause (24.1s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 unpause: (8.0620835s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 unpause: (8.0123643s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 unpause: (8.0261108s)
--- PASS: TestErrorSpam/unpause (24.10s)

TestErrorSpam/stop (63.67s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 stop
E0401 10:38:23.424174    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 10:38:51.251049    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 stop: (40.7304068s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 stop: (11.6424072s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-189500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-189500 stop: (11.2959752s)
--- PASS: TestErrorSpam/stop (63.67s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\1260\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (218s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-706500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-706500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m37.9872852s)
--- PASS: TestFunctional/serial/StartWithProxy (218.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.15s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)

TestFunctional/serial/CacheCmd/cache/add_remote (348.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 cache add registry.k8s.io/pause:3.1
E0401 10:53:23.435570    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 cache add registry.k8s.io/pause:3.1: (1m47.2533578s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 cache add registry.k8s.io/pause:3.3: (2m0.5038369s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 cache add registry.k8s.io/pause:latest: (2m0.4912473s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (348.25s)

TestFunctional/serial/CacheCmd/cache/add_local (60.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-706500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3829619192\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-706500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3829619192\001: (1.9489513s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 cache add minikube-local-cache-test:functional-706500
E0401 10:58:23.435095    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-706500 cache add minikube-local-cache-test:functional-706500: (58.3567327s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-706500 cache delete minikube-local-cache-test:functional-706500
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-706500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.81s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

TestFunctional/serial/CacheCmd/cache/delete (0.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.55s)

TestFunctional/delete_addon-resizer_images (0.02s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-706500
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-706500: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:functional-706500" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-706500": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-706500
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-706500: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-706500": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-706500
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-706500: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-706500": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (855.58s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-401500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0401 11:23:06.665642    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 11:23:23.445237    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 11:28:23.442343    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 11:33:23.457721    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-401500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (13m36.6518966s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 status -v=7 --alsologtostderr: (38.9269503s)
--- PASS: TestMultiControlPlane/serial/StartCluster (855.58s)

TestMultiControlPlane/serial/DeployApp (13.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-401500 -- rollout status deployment/busybox: (3.5389125s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-f5xk7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-f5xk7 -- nslookup kubernetes.io: (1.9793194s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-gr89z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-gr89z -- nslookup kubernetes.io: (1.6887985s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-q7xs6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-f5xk7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-gr89z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-q7xs6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-f5xk7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-gr89z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-401500 -- exec busybox-7fdf7869d9-q7xs6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.04s)
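(Aside: the DeployApp assertions above reduce to running nslookup inside each busybox replica and requiring success. As an illustration only, not the ha_test.go implementation, the Go sketch below shells out to kubectl exactly as the logged commands do; the pod names and the ha-401500 context are copied from this log, whereas in practice the pods would be discovered via `kubectl get pods -o jsonpath=...`.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pod names and kube context as they appear in the log above.
	pods := []string{
		"busybox-7fdf7869d9-f5xk7",
		"busybox-7fdf7869d9-gr89z",
		"busybox-7fdf7869d9-q7xs6",
	}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range hosts {
			// Same probe the test runs: nslookup inside the pod.
			out, err := exec.Command("kubectl", "--context", "ha-401500",
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n%s\n", pod, host, err, out)
				continue
			}
			fmt.Printf("%s: resolved %s\n", pod, host)
		}
	}
}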

TestMultiControlPlane/serial/AddWorkerNode (268.21s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-401500 -v=7 --alsologtostderr
E0401 11:38:23.457163    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-401500 -v=7 --alsologtostderr: (3m36.7288692s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-401500 status -v=7 --alsologtostderr
E0401 11:39:46.680900    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-401500 status -v=7 --alsologtostderr: (51.4786267s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (268.21s)

TestMultiControlPlane/serial/NodeLabels (0.22s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-401500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.22s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (30.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (30.4785413s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (30.48s)

TestImageBuild/serial/Setup (211.4s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-891700 --driver=hyperv
E0401 11:56:26.700387    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-891700 --driver=hyperv: (3m31.4014517s)
--- PASS: TestImageBuild/serial/Setup (211.40s)

TestImageBuild/serial/NormalBuild (10.08s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-891700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-891700: (10.0799055s)
--- PASS: TestImageBuild/serial/NormalBuild (10.08s)

TestImageBuild/serial/BuildWithBuildArg (9.45s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-891700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-891700: (9.4519292s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.45s)

TestImageBuild/serial/BuildWithDockerIgnore (8.24s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-891700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-891700: (8.2441227s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.24s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.15s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-891700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-891700: (8.1497907s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.15s)

TestJSONOutput/start/Command (222.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-924000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0401 11:58:23.462451    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-924000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m42.1852901s)
--- PASS: TestJSONOutput/start/Command (222.19s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.4s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-924000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-924000 --output=json --user=testUser: (8.4032149s)
--- PASS: TestJSONOutput/pause/Command (8.40s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (8.35s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-924000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-924000 --output=json --user=testUser: (8.3536882s)
--- PASS: TestJSONOutput/unpause/Command (8.35s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (35.56s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-924000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-924000 --output=json --user=testUser: (35.5619191s)
--- PASS: TestJSONOutput/stop/Command (35.56s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.64s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-230200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-230200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (318.7107ms)
-- stdout --
	{"specversion":"1.0","id":"46e6ea9a-be5b-4383-9cd0-4075a3442c94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-230200] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"614933ac-0931-41ec-8c5b-086e085214d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"594d950f-ede0-4ad5-a734-54acd3a370f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"34042316-bd7e-4841-aadc-d5b4ff503785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"26176248-7acd-44a2-9579-25b8be2c555d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18551"}}
	{"specversion":"1.0","id":"097b3883-8953-44ec-928e-83875c1b283a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b198d337-edf5-4052-95d3-ae79298b7a70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0401 12:03:07.414054   12852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-230200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-230200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-230200: (1.3176533s)
--- PASS: TestErrorJSONOutput (1.64s)
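(Aside: each stdout line captured above is a CloudEvents-style JSON envelope, which is what `--output=json` is exercised for. As a hedged illustration, not part of the test suite, the sketch below decodes such lines from stdin and surfaces the error event; the struct is trimmed to the fields visible in this log, where all data values happen to be strings.)

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors one line of `minikube ... --output=json` as seen above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip stderr noise interleaved with the JSON stream
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		// The DRV_UNSUPPORTED_OS failure above arrives as a *.error event.
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}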

TestMainNoArgs (0.26s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

TestMinikubeProfile (570.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-871400 --driver=hyperv
E0401 12:03:23.455952    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-871400 --driver=hyperv: (3m32.6855708s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-871400 --driver=hyperv
E0401 12:08:23.471395    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-871400 --driver=hyperv: (3m31.3102493s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-871400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.6381743s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-871400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.4594821s)
helpers_test.go:175: Cleaning up "second-871400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-871400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-871400: (53.7628594s)
helpers_test.go:175: Cleaning up "first-871400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-871400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-871400: (46.596403s)
--- PASS: TestMinikubeProfile (570.43s)

TestMountStart/serial/StartWithMountFirst (164.25s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-137000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0401 12:13:06.715208    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 12:13:23.460699    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-137000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m43.2442092s)
--- PASS: TestMountStart/serial/StartWithMountFirst (164.25s)

TestMountStart/serial/VerifyMountFirst (10.09s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-137000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-137000 ssh -- ls /minikube-host: (10.0917423s)
--- PASS: TestMountStart/serial/VerifyMountFirst (10.09s)

TestMountStart/serial/StartWithMountSecond (165.48s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-218100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-218100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m44.4673948s)
--- PASS: TestMountStart/serial/StartWithMountSecond (165.48s)

TestMountStart/serial/VerifyMountSecond (10.13s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-218100 ssh -- ls /minikube-host
E0401 12:18:23.474225    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-218100 ssh -- ls /minikube-host: (10.1338752s)
--- PASS: TestMountStart/serial/VerifyMountSecond (10.13s)

TestMountStart/serial/DeleteFirst (29.07s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-137000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-137000 --alsologtostderr -v=5: (29.0735317s)
--- PASS: TestMountStart/serial/DeleteFirst (29.07s)

TestMountStart/serial/VerifyMountPostDelete (10.16s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-218100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-218100 ssh -- ls /minikube-host: (10.1601962s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (10.16s)

TestMountStart/serial/Stop (31.89s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-218100
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-218100: (31.8891274s)
--- PASS: TestMountStart/serial/Stop (31.89s)

TestMountStart/serial/RestartStopped (125.02s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-218100
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-218100: (2m4.0200947s)
--- PASS: TestMountStart/serial/RestartStopped (125.02s)

TestMountStart/serial/VerifyMountPostStop (9.89s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-218100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-218100 ssh -- ls /minikube-host: (9.8891756s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.89s)

TestScheduledStopWindows (350.05s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-109000 --memory=2048 --driver=hyperv
E0401 13:03:06.768636    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0401 13:03:23.492161    1260 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-109000 --memory=2048 --driver=hyperv: (3m32.7572296s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-109000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-109000 --schedule 5m: (11.512594s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-109000 -n scheduled-stop-109000
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-109000 -n scheduled-stop-109000: exit status 1 (10.0247705s)
** stderr ** 
	W0401 13:05:10.778538    7212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-109000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-109000 -- sudo systemctl show minikube-scheduled-stop --no-page: (10.3085532s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-109000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-109000 --schedule 5s: (11.5458176s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-109000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-109000: exit status 7 (2.6120292s)
-- stdout --
	scheduled-stop-109000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W0401 13:06:42.664642   13968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-109000 -n scheduled-stop-109000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-109000 -n scheduled-stop-109000: exit status 7 (2.5497585s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0401 13:06:45.276087    1148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-109000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-109000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-109000: (28.7267053s)
--- PASS: TestScheduledStopWindows (350.05s)
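(Aside: the scheduled-stop flow above is to arm a timer with `stop --schedule`, then poll `status` until the host reports Stopped; non-zero exits from `status` are expected once the VM is down, hence the "(may be ok)" notes. Below is a rough standalone poller in the same spirit, using only the commands shown in this log; the 10-second interval and 30-try cap are arbitrary choices for illustration.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "scheduled-stop-109000"
	for try := 0; try < 30; try++ {
		// `status` exits non-zero once the VM is down (exit status 7 above),
		// so the error is ignored and only the stdout text is inspected.
		out, _ := exec.Command("out/minikube-windows-amd64.exe",
			"status", "--format={{.Host}}", "-p", profile).Output()
		if strings.Contains(string(out), "Stopped") {
			fmt.Println("host stopped")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}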

TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-035600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-035600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (411.2152ms)
-- stdout --
	* [NoKubernetes-035600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	W0401 13:07:16.575775   12680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)
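(Aside: the MK_USAGE failure above is plain flag validation: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and minikube exits with status 14 before doing any work. A generic sketch of that kind of check with Go's standard flag package follows; this is not minikube's actual source, and only the message text and exit code are copied from the log.)

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// Mirrors the MK_USAGE rejection seen above: the two flags conflict.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // exit status 14, as observed in the test
	}
	fmt.Println("flags accepted")
}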

Test skip (22/146)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)
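
The six download-only skips above all hinge on the same precondition: if a preload tarball for the target Kubernetes version is already cached, there is nothing left for the image/binary download checks to verify. A minimal sketch of that guard, assuming a hypothetical preloadExists helper and tarball naming scheme (the real check lives in aaa_download_only_test.go):

// Sketch of a _test.go file; preloadExists and the tarball naming scheme
// are assumptions for illustration, not minikube's actual helper.
package integration

import (
	"fmt"
	"os"
	"path/filepath"
	"testing"
)

// preloadExists mirrors the skip reason logged above: when a preload
// tarball for the target Kubernetes version is already cached, the
// download checks have nothing to exercise.
func preloadExists(minikubeHome, k8sVersion string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-%s.tar.lz4", k8sVersion) // assumed naming
	_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
	return err == nil
}

func TestCachedImages(t *testing.T) {
	if preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.29.3") {
		t.Skip("Preload exists, images won't be cached")
	}
	// ... verify each expected image landed in the cache ...
}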

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
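
The driver install-or-update skips above ("Skip if not linux.", "Skip if not darwin.") all use the same platform guard: bail out immediately unless the test is running on the OS the driver targets. A minimal sketch of that pattern, assuming the conventional runtime.GOOS check (the test name and body are hypothetical placeholders):

// Sketch of a _test.go file.
package integration

import (
	"runtime"
	"testing"
)

// TestKVMDriverInstall is a hypothetical placeholder showing the platform
// guard behind the SKIP lines above: on this Windows run, runtime.GOOS is
// "windows", so the test skips before doing any driver work.
func TestKVMDriverInstall(t *testing.T) {
	if runtime.GOOS != "linux" {
		t.Skip("Skip if not linux.")
	}
	// ... KVM driver install/update verification would run here ...
}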

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    